Jan 22 07:49:11 np0005592158 kernel: Linux version 5.14.0-661.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026
Jan 22 07:49:11 np0005592158 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 22 07:49:11 np0005592158 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 22 07:49:11 np0005592158 kernel: BIOS-provided physical RAM map:
Jan 22 07:49:11 np0005592158 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 22 07:49:11 np0005592158 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 22 07:49:11 np0005592158 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 22 07:49:11 np0005592158 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 22 07:49:11 np0005592158 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 22 07:49:11 np0005592158 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 22 07:49:11 np0005592158 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 22 07:49:11 np0005592158 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 22 07:49:11 np0005592158 kernel: NX (Execute Disable) protection: active
Jan 22 07:49:11 np0005592158 kernel: APIC: Static calls initialized
Jan 22 07:49:11 np0005592158 kernel: SMBIOS 2.8 present.
Jan 22 07:49:11 np0005592158 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 22 07:49:11 np0005592158 kernel: Hypervisor detected: KVM
Jan 22 07:49:11 np0005592158 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 22 07:49:11 np0005592158 kernel: kvm-clock: using sched offset of 5016221411 cycles
Jan 22 07:49:11 np0005592158 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 22 07:49:11 np0005592158 kernel: tsc: Detected 2799.998 MHz processor
Jan 22 07:49:11 np0005592158 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 22 07:49:11 np0005592158 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 22 07:49:11 np0005592158 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 22 07:49:11 np0005592158 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 22 07:49:11 np0005592158 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 22 07:49:11 np0005592158 kernel: Using GB pages for direct mapping
Jan 22 07:49:11 np0005592158 kernel: RAMDISK: [mem 0x2d426000-0x32a0afff]
Jan 22 07:49:11 np0005592158 kernel: ACPI: Early table checksum verification disabled
Jan 22 07:49:11 np0005592158 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 22 07:49:11 np0005592158 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 22 07:49:11 np0005592158 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 22 07:49:11 np0005592158 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 22 07:49:11 np0005592158 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 22 07:49:11 np0005592158 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 22 07:49:11 np0005592158 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 22 07:49:11 np0005592158 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 22 07:49:11 np0005592158 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 22 07:49:11 np0005592158 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 22 07:49:11 np0005592158 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 22 07:49:11 np0005592158 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 22 07:49:11 np0005592158 kernel: No NUMA configuration found
Jan 22 07:49:11 np0005592158 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 22 07:49:11 np0005592158 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Jan 22 07:49:11 np0005592158 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 22 07:49:11 np0005592158 kernel: Zone ranges:
Jan 22 07:49:11 np0005592158 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 22 07:49:11 np0005592158 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 22 07:49:11 np0005592158 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 22 07:49:11 np0005592158 kernel:  Device   empty
Jan 22 07:49:11 np0005592158 kernel: Movable zone start for each node
Jan 22 07:49:11 np0005592158 kernel: Early memory node ranges
Jan 22 07:49:11 np0005592158 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 22 07:49:11 np0005592158 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 22 07:49:11 np0005592158 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 22 07:49:11 np0005592158 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 22 07:49:11 np0005592158 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 22 07:49:11 np0005592158 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 22 07:49:11 np0005592158 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 22 07:49:11 np0005592158 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 22 07:49:11 np0005592158 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 22 07:49:11 np0005592158 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 22 07:49:11 np0005592158 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 22 07:49:11 np0005592158 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 22 07:49:11 np0005592158 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 22 07:49:11 np0005592158 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 22 07:49:11 np0005592158 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 22 07:49:11 np0005592158 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 22 07:49:11 np0005592158 kernel: TSC deadline timer available
Jan 22 07:49:11 np0005592158 kernel: CPU topo: Max. logical packages:   8
Jan 22 07:49:11 np0005592158 kernel: CPU topo: Max. logical dies:       8
Jan 22 07:49:11 np0005592158 kernel: CPU topo: Max. dies per package:   1
Jan 22 07:49:11 np0005592158 kernel: CPU topo: Max. threads per core:   1
Jan 22 07:49:11 np0005592158 kernel: CPU topo: Num. cores per package:     1
Jan 22 07:49:11 np0005592158 kernel: CPU topo: Num. threads per package:   1
Jan 22 07:49:11 np0005592158 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 22 07:49:11 np0005592158 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 22 07:49:11 np0005592158 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 22 07:49:11 np0005592158 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 22 07:49:11 np0005592158 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 22 07:49:11 np0005592158 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 22 07:49:11 np0005592158 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 22 07:49:11 np0005592158 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 22 07:49:11 np0005592158 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 22 07:49:11 np0005592158 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 22 07:49:11 np0005592158 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 22 07:49:11 np0005592158 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 22 07:49:11 np0005592158 kernel: Booting paravirtualized kernel on KVM
Jan 22 07:49:11 np0005592158 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 22 07:49:11 np0005592158 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 22 07:49:11 np0005592158 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 22 07:49:11 np0005592158 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 22 07:49:11 np0005592158 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 22 07:49:11 np0005592158 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64", will be passed to user space.
Jan 22 07:49:11 np0005592158 kernel: random: crng init done
Jan 22 07:49:11 np0005592158 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 22 07:49:11 np0005592158 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 22 07:49:11 np0005592158 kernel: Fallback order for Node 0: 0 
Jan 22 07:49:11 np0005592158 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 22 07:49:11 np0005592158 kernel: Policy zone: Normal
Jan 22 07:49:11 np0005592158 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 22 07:49:11 np0005592158 kernel: software IO TLB: area num 8.
Jan 22 07:49:11 np0005592158 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 22 07:49:11 np0005592158 kernel: ftrace: allocating 49417 entries in 194 pages
Jan 22 07:49:11 np0005592158 kernel: ftrace: allocated 194 pages with 3 groups
Jan 22 07:49:11 np0005592158 kernel: Dynamic Preempt: voluntary
Jan 22 07:49:11 np0005592158 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 22 07:49:11 np0005592158 kernel: rcu: 	RCU event tracing is enabled.
Jan 22 07:49:11 np0005592158 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 22 07:49:11 np0005592158 kernel: 	Trampoline variant of Tasks RCU enabled.
Jan 22 07:49:11 np0005592158 kernel: 	Rude variant of Tasks RCU enabled.
Jan 22 07:49:11 np0005592158 kernel: 	Tracing variant of Tasks RCU enabled.
Jan 22 07:49:11 np0005592158 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 22 07:49:11 np0005592158 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 22 07:49:11 np0005592158 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 22 07:49:11 np0005592158 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 22 07:49:11 np0005592158 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 22 07:49:11 np0005592158 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 22 07:49:11 np0005592158 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 22 07:49:11 np0005592158 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 22 07:49:11 np0005592158 kernel: Console: colour VGA+ 80x25
Jan 22 07:49:11 np0005592158 kernel: printk: console [ttyS0] enabled
Jan 22 07:49:11 np0005592158 kernel: ACPI: Core revision 20230331
Jan 22 07:49:11 np0005592158 kernel: APIC: Switch to symmetric I/O mode setup
Jan 22 07:49:11 np0005592158 kernel: x2apic enabled
Jan 22 07:49:11 np0005592158 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 22 07:49:11 np0005592158 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 22 07:49:11 np0005592158 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Jan 22 07:49:11 np0005592158 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 22 07:49:11 np0005592158 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 22 07:49:11 np0005592158 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 22 07:49:11 np0005592158 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 22 07:49:11 np0005592158 kernel: Spectre V2 : Mitigation: Retpolines
Jan 22 07:49:11 np0005592158 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 22 07:49:11 np0005592158 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 22 07:49:11 np0005592158 kernel: RETBleed: Mitigation: untrained return thunk
Jan 22 07:49:11 np0005592158 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 22 07:49:11 np0005592158 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 22 07:49:11 np0005592158 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 22 07:49:11 np0005592158 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 22 07:49:11 np0005592158 kernel: x86/bugs: return thunk changed
Jan 22 07:49:11 np0005592158 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 22 07:49:11 np0005592158 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 22 07:49:11 np0005592158 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 22 07:49:11 np0005592158 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 22 07:49:11 np0005592158 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 22 07:49:11 np0005592158 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 22 07:49:11 np0005592158 kernel: Freeing SMP alternatives memory: 40K
Jan 22 07:49:11 np0005592158 kernel: pid_max: default: 32768 minimum: 301
Jan 22 07:49:11 np0005592158 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 22 07:49:11 np0005592158 kernel: landlock: Up and running.
Jan 22 07:49:11 np0005592158 kernel: Yama: becoming mindful.
Jan 22 07:49:11 np0005592158 kernel: SELinux:  Initializing.
Jan 22 07:49:11 np0005592158 kernel: LSM support for eBPF active
Jan 22 07:49:11 np0005592158 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 22 07:49:11 np0005592158 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 22 07:49:11 np0005592158 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 22 07:49:11 np0005592158 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 22 07:49:11 np0005592158 kernel: ... version:                0
Jan 22 07:49:11 np0005592158 kernel: ... bit width:              48
Jan 22 07:49:11 np0005592158 kernel: ... generic registers:      6
Jan 22 07:49:11 np0005592158 kernel: ... value mask:             0000ffffffffffff
Jan 22 07:49:11 np0005592158 kernel: ... max period:             00007fffffffffff
Jan 22 07:49:11 np0005592158 kernel: ... fixed-purpose events:   0
Jan 22 07:49:11 np0005592158 kernel: ... event mask:             000000000000003f
Jan 22 07:49:11 np0005592158 kernel: signal: max sigframe size: 1776
Jan 22 07:49:11 np0005592158 kernel: rcu: Hierarchical SRCU implementation.
Jan 22 07:49:11 np0005592158 kernel: rcu: 	Max phase no-delay instances is 400.
Jan 22 07:49:11 np0005592158 kernel: smp: Bringing up secondary CPUs ...
Jan 22 07:49:11 np0005592158 kernel: smpboot: x86: Booting SMP configuration:
Jan 22 07:49:11 np0005592158 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 22 07:49:11 np0005592158 kernel: smp: Brought up 1 node, 8 CPUs
Jan 22 07:49:11 np0005592158 kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Jan 22 07:49:11 np0005592158 kernel: node 0 deferred pages initialised in 15ms
Jan 22 07:49:11 np0005592158 kernel: Memory: 7763684K/8388068K available (16384K kernel code, 5797K rwdata, 13916K rodata, 4200K init, 7192K bss, 618360K reserved, 0K cma-reserved)
Jan 22 07:49:11 np0005592158 kernel: devtmpfs: initialized
Jan 22 07:49:11 np0005592158 kernel: x86/mm: Memory block size: 128MB
Jan 22 07:49:11 np0005592158 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 22 07:49:11 np0005592158 kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 22 07:49:11 np0005592158 kernel: pinctrl core: initialized pinctrl subsystem
Jan 22 07:49:11 np0005592158 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 22 07:49:11 np0005592158 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 22 07:49:11 np0005592158 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 22 07:49:11 np0005592158 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 22 07:49:11 np0005592158 kernel: audit: initializing netlink subsys (disabled)
Jan 22 07:49:11 np0005592158 kernel: audit: type=2000 audit(1769086148.638:1): state=initialized audit_enabled=0 res=1
Jan 22 07:49:11 np0005592158 kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 22 07:49:11 np0005592158 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 22 07:49:11 np0005592158 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 22 07:49:11 np0005592158 kernel: cpuidle: using governor menu
Jan 22 07:49:11 np0005592158 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 22 07:49:11 np0005592158 kernel: PCI: Using configuration type 1 for base access
Jan 22 07:49:11 np0005592158 kernel: PCI: Using configuration type 1 for extended access
Jan 22 07:49:11 np0005592158 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 22 07:49:11 np0005592158 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 22 07:49:11 np0005592158 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 22 07:49:11 np0005592158 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 22 07:49:11 np0005592158 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 22 07:49:11 np0005592158 kernel: Demotion targets for Node 0: null
Jan 22 07:49:11 np0005592158 kernel: cryptd: max_cpu_qlen set to 1000
Jan 22 07:49:11 np0005592158 kernel: ACPI: Added _OSI(Module Device)
Jan 22 07:49:11 np0005592158 kernel: ACPI: Added _OSI(Processor Device)
Jan 22 07:49:11 np0005592158 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 22 07:49:11 np0005592158 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 22 07:49:11 np0005592158 kernel: ACPI: Interpreter enabled
Jan 22 07:49:11 np0005592158 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 22 07:49:11 np0005592158 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 22 07:49:11 np0005592158 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 22 07:49:11 np0005592158 kernel: PCI: Using E820 reservations for host bridge windows
Jan 22 07:49:11 np0005592158 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 22 07:49:11 np0005592158 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 22 07:49:11 np0005592158 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 22 07:49:11 np0005592158 kernel: acpiphp: Slot [3] registered
Jan 22 07:49:11 np0005592158 kernel: acpiphp: Slot [4] registered
Jan 22 07:49:11 np0005592158 kernel: acpiphp: Slot [5] registered
Jan 22 07:49:11 np0005592158 kernel: acpiphp: Slot [6] registered
Jan 22 07:49:11 np0005592158 kernel: acpiphp: Slot [7] registered
Jan 22 07:49:11 np0005592158 kernel: acpiphp: Slot [8] registered
Jan 22 07:49:11 np0005592158 kernel: acpiphp: Slot [9] registered
Jan 22 07:49:11 np0005592158 kernel: acpiphp: Slot [10] registered
Jan 22 07:49:11 np0005592158 kernel: acpiphp: Slot [11] registered
Jan 22 07:49:11 np0005592158 kernel: acpiphp: Slot [12] registered
Jan 22 07:49:11 np0005592158 kernel: acpiphp: Slot [13] registered
Jan 22 07:49:11 np0005592158 kernel: acpiphp: Slot [14] registered
Jan 22 07:49:11 np0005592158 kernel: acpiphp: Slot [15] registered
Jan 22 07:49:11 np0005592158 kernel: acpiphp: Slot [16] registered
Jan 22 07:49:11 np0005592158 kernel: acpiphp: Slot [17] registered
Jan 22 07:49:11 np0005592158 kernel: acpiphp: Slot [18] registered
Jan 22 07:49:11 np0005592158 kernel: acpiphp: Slot [19] registered
Jan 22 07:49:11 np0005592158 kernel: acpiphp: Slot [20] registered
Jan 22 07:49:11 np0005592158 kernel: acpiphp: Slot [21] registered
Jan 22 07:49:11 np0005592158 kernel: acpiphp: Slot [22] registered
Jan 22 07:49:11 np0005592158 kernel: acpiphp: Slot [23] registered
Jan 22 07:49:11 np0005592158 kernel: acpiphp: Slot [24] registered
Jan 22 07:49:11 np0005592158 kernel: acpiphp: Slot [25] registered
Jan 22 07:49:11 np0005592158 kernel: acpiphp: Slot [26] registered
Jan 22 07:49:11 np0005592158 kernel: acpiphp: Slot [27] registered
Jan 22 07:49:11 np0005592158 kernel: acpiphp: Slot [28] registered
Jan 22 07:49:11 np0005592158 kernel: acpiphp: Slot [29] registered
Jan 22 07:49:11 np0005592158 kernel: acpiphp: Slot [30] registered
Jan 22 07:49:11 np0005592158 kernel: acpiphp: Slot [31] registered
Jan 22 07:49:11 np0005592158 kernel: PCI host bridge to bus 0000:00
Jan 22 07:49:11 np0005592158 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 22 07:49:11 np0005592158 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 22 07:49:11 np0005592158 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 22 07:49:11 np0005592158 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 22 07:49:11 np0005592158 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 22 07:49:11 np0005592158 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 22 07:49:11 np0005592158 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 22 07:49:11 np0005592158 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 22 07:49:11 np0005592158 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 22 07:49:11 np0005592158 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 22 07:49:11 np0005592158 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 22 07:49:11 np0005592158 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 22 07:49:11 np0005592158 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 22 07:49:11 np0005592158 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 22 07:49:11 np0005592158 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 22 07:49:11 np0005592158 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 22 07:49:11 np0005592158 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 22 07:49:11 np0005592158 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 22 07:49:11 np0005592158 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 22 07:49:11 np0005592158 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 22 07:49:11 np0005592158 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 22 07:49:11 np0005592158 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 22 07:49:11 np0005592158 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 22 07:49:11 np0005592158 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 22 07:49:11 np0005592158 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 22 07:49:11 np0005592158 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 22 07:49:11 np0005592158 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 22 07:49:11 np0005592158 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 22 07:49:11 np0005592158 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 22 07:49:11 np0005592158 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 22 07:49:11 np0005592158 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 22 07:49:11 np0005592158 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 22 07:49:11 np0005592158 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 22 07:49:11 np0005592158 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 22 07:49:11 np0005592158 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 22 07:49:11 np0005592158 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 22 07:49:11 np0005592158 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 22 07:49:11 np0005592158 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 22 07:49:11 np0005592158 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 22 07:49:11 np0005592158 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 22 07:49:11 np0005592158 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 22 07:49:11 np0005592158 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 22 07:49:11 np0005592158 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 22 07:49:11 np0005592158 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 22 07:49:11 np0005592158 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 22 07:49:11 np0005592158 kernel: iommu: Default domain type: Translated
Jan 22 07:49:11 np0005592158 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 22 07:49:11 np0005592158 kernel: SCSI subsystem initialized
Jan 22 07:49:11 np0005592158 kernel: ACPI: bus type USB registered
Jan 22 07:49:11 np0005592158 kernel: usbcore: registered new interface driver usbfs
Jan 22 07:49:11 np0005592158 kernel: usbcore: registered new interface driver hub
Jan 22 07:49:11 np0005592158 kernel: usbcore: registered new device driver usb
Jan 22 07:49:11 np0005592158 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 22 07:49:11 np0005592158 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 22 07:49:11 np0005592158 kernel: PTP clock support registered
Jan 22 07:49:11 np0005592158 kernel: EDAC MC: Ver: 3.0.0
Jan 22 07:49:11 np0005592158 kernel: NetLabel: Initializing
Jan 22 07:49:11 np0005592158 kernel: NetLabel:  domain hash size = 128
Jan 22 07:49:11 np0005592158 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 22 07:49:11 np0005592158 kernel: NetLabel:  unlabeled traffic allowed by default
Jan 22 07:49:11 np0005592158 kernel: PCI: Using ACPI for IRQ routing
Jan 22 07:49:11 np0005592158 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 22 07:49:11 np0005592158 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 22 07:49:11 np0005592158 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 22 07:49:11 np0005592158 kernel: vgaarb: loaded
Jan 22 07:49:11 np0005592158 kernel: clocksource: Switched to clocksource kvm-clock
Jan 22 07:49:11 np0005592158 kernel: VFS: Disk quotas dquot_6.6.0
Jan 22 07:49:11 np0005592158 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 22 07:49:11 np0005592158 kernel: pnp: PnP ACPI init
Jan 22 07:49:11 np0005592158 kernel: pnp: PnP ACPI: found 5 devices
Jan 22 07:49:11 np0005592158 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 22 07:49:11 np0005592158 kernel: NET: Registered PF_INET protocol family
Jan 22 07:49:11 np0005592158 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 22 07:49:11 np0005592158 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 22 07:49:11 np0005592158 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 22 07:49:11 np0005592158 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 22 07:49:11 np0005592158 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 22 07:49:11 np0005592158 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 22 07:49:11 np0005592158 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 22 07:49:11 np0005592158 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 22 07:49:11 np0005592158 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 22 07:49:11 np0005592158 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 22 07:49:11 np0005592158 kernel: NET: Registered PF_XDP protocol family
Jan 22 07:49:11 np0005592158 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 22 07:49:11 np0005592158 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 22 07:49:11 np0005592158 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 22 07:49:11 np0005592158 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 22 07:49:11 np0005592158 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 22 07:49:11 np0005592158 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 22 07:49:11 np0005592158 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 22 07:49:11 np0005592158 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 22 07:49:11 np0005592158 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 73738 usecs
Jan 22 07:49:11 np0005592158 kernel: PCI: CLS 0 bytes, default 64
Jan 22 07:49:11 np0005592158 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 22 07:49:11 np0005592158 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 22 07:49:11 np0005592158 kernel: ACPI: bus type thunderbolt registered
Jan 22 07:49:11 np0005592158 kernel: Trying to unpack rootfs image as initramfs...
Jan 22 07:49:11 np0005592158 kernel: Initialise system trusted keyrings
Jan 22 07:49:11 np0005592158 kernel: Key type blacklist registered
Jan 22 07:49:11 np0005592158 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 22 07:49:11 np0005592158 kernel: zbud: loaded
Jan 22 07:49:11 np0005592158 kernel: integrity: Platform Keyring initialized
Jan 22 07:49:11 np0005592158 kernel: integrity: Machine keyring initialized
Jan 22 07:49:11 np0005592158 kernel: Freeing initrd memory: 87956K
Jan 22 07:49:11 np0005592158 kernel: NET: Registered PF_ALG protocol family
Jan 22 07:49:11 np0005592158 kernel: xor: automatically using best checksumming function   avx       
Jan 22 07:49:11 np0005592158 kernel: Key type asymmetric registered
Jan 22 07:49:11 np0005592158 kernel: Asymmetric key parser 'x509' registered
Jan 22 07:49:11 np0005592158 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 22 07:49:11 np0005592158 kernel: io scheduler mq-deadline registered
Jan 22 07:49:11 np0005592158 kernel: io scheduler kyber registered
Jan 22 07:49:11 np0005592158 kernel: io scheduler bfq registered
Jan 22 07:49:11 np0005592158 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 22 07:49:11 np0005592158 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 22 07:49:11 np0005592158 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 22 07:49:11 np0005592158 kernel: ACPI: button: Power Button [PWRF]
Jan 22 07:49:11 np0005592158 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 22 07:49:11 np0005592158 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 22 07:49:11 np0005592158 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 22 07:49:11 np0005592158 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 22 07:49:11 np0005592158 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 22 07:49:11 np0005592158 kernel: Non-volatile memory driver v1.3
Jan 22 07:49:11 np0005592158 kernel: rdac: device handler registered
Jan 22 07:49:11 np0005592158 kernel: hp_sw: device handler registered
Jan 22 07:49:11 np0005592158 kernel: emc: device handler registered
Jan 22 07:49:11 np0005592158 kernel: alua: device handler registered
Jan 22 07:49:11 np0005592158 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 22 07:49:11 np0005592158 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 22 07:49:11 np0005592158 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 22 07:49:11 np0005592158 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 22 07:49:11 np0005592158 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 22 07:49:11 np0005592158 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 22 07:49:11 np0005592158 kernel: usb usb1: Product: UHCI Host Controller
Jan 22 07:49:11 np0005592158 kernel: usb usb1: Manufacturer: Linux 5.14.0-661.el9.x86_64 uhci_hcd
Jan 22 07:49:11 np0005592158 kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 22 07:49:11 np0005592158 kernel: hub 1-0:1.0: USB hub found
Jan 22 07:49:11 np0005592158 kernel: hub 1-0:1.0: 2 ports detected
Jan 22 07:49:11 np0005592158 kernel: usbcore: registered new interface driver usbserial_generic
Jan 22 07:49:11 np0005592158 kernel: usbserial: USB Serial support registered for generic
Jan 22 07:49:11 np0005592158 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 22 07:49:11 np0005592158 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 22 07:49:11 np0005592158 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 22 07:49:11 np0005592158 kernel: mousedev: PS/2 mouse device common for all mice
Jan 22 07:49:11 np0005592158 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 22 07:49:11 np0005592158 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 22 07:49:11 np0005592158 kernel: rtc_cmos 00:04: registered as rtc0
Jan 22 07:49:11 np0005592158 kernel: rtc_cmos 00:04: setting system clock to 2026-01-22T12:49:10 UTC (1769086150)
Jan 22 07:49:11 np0005592158 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 22 07:49:11 np0005592158 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 22 07:49:11 np0005592158 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 22 07:49:11 np0005592158 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 22 07:49:11 np0005592158 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 22 07:49:11 np0005592158 kernel: usbcore: registered new interface driver usbhid
Jan 22 07:49:11 np0005592158 kernel: usbhid: USB HID core driver
Jan 22 07:49:11 np0005592158 kernel: drop_monitor: Initializing network drop monitor service
Jan 22 07:49:11 np0005592158 kernel: Initializing XFRM netlink socket
Jan 22 07:49:11 np0005592158 kernel: NET: Registered PF_INET6 protocol family
Jan 22 07:49:11 np0005592158 kernel: Segment Routing with IPv6
Jan 22 07:49:11 np0005592158 kernel: NET: Registered PF_PACKET protocol family
Jan 22 07:49:11 np0005592158 kernel: mpls_gso: MPLS GSO support
Jan 22 07:49:11 np0005592158 kernel: IPI shorthand broadcast: enabled
Jan 22 07:49:11 np0005592158 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 22 07:49:11 np0005592158 kernel: AES CTR mode by8 optimization enabled
Jan 22 07:49:11 np0005592158 kernel: sched_clock: Marking stable (2892043577, 148805853)->(3262543804, -221694374)
Jan 22 07:49:11 np0005592158 kernel: registered taskstats version 1
Jan 22 07:49:11 np0005592158 kernel: Loading compiled-in X.509 certificates
Jan 22 07:49:11 np0005592158 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 22 07:49:11 np0005592158 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 22 07:49:11 np0005592158 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 22 07:49:11 np0005592158 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 22 07:49:11 np0005592158 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 22 07:49:11 np0005592158 kernel: Demotion targets for Node 0: null
Jan 22 07:49:11 np0005592158 kernel: page_owner is disabled
Jan 22 07:49:11 np0005592158 kernel: Key type .fscrypt registered
Jan 22 07:49:11 np0005592158 kernel: Key type fscrypt-provisioning registered
Jan 22 07:49:11 np0005592158 kernel: Key type big_key registered
Jan 22 07:49:11 np0005592158 kernel: Key type encrypted registered
Jan 22 07:49:11 np0005592158 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 22 07:49:11 np0005592158 kernel: Loading compiled-in module X.509 certificates
Jan 22 07:49:11 np0005592158 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 22 07:49:11 np0005592158 kernel: ima: Allocated hash algorithm: sha256
Jan 22 07:49:11 np0005592158 kernel: ima: No architecture policies found
Jan 22 07:49:11 np0005592158 kernel: evm: Initialising EVM extended attributes:
Jan 22 07:49:11 np0005592158 kernel: evm: security.selinux
Jan 22 07:49:11 np0005592158 kernel: evm: security.SMACK64 (disabled)
Jan 22 07:49:11 np0005592158 kernel: evm: security.SMACK64EXEC (disabled)
Jan 22 07:49:11 np0005592158 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 22 07:49:11 np0005592158 kernel: evm: security.SMACK64MMAP (disabled)
Jan 22 07:49:11 np0005592158 kernel: evm: security.apparmor (disabled)
Jan 22 07:49:11 np0005592158 kernel: evm: security.ima
Jan 22 07:49:11 np0005592158 kernel: evm: security.capability
Jan 22 07:49:11 np0005592158 kernel: evm: HMAC attrs: 0x1
Jan 22 07:49:11 np0005592158 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 22 07:49:11 np0005592158 kernel: Running certificate verification RSA selftest
Jan 22 07:49:11 np0005592158 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 22 07:49:11 np0005592158 kernel: Running certificate verification ECDSA selftest
Jan 22 07:49:11 np0005592158 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 22 07:49:11 np0005592158 kernel: clk: Disabling unused clocks
Jan 22 07:49:11 np0005592158 kernel: Freeing unused decrypted memory: 2028K
Jan 22 07:49:11 np0005592158 kernel: Freeing unused kernel image (initmem) memory: 4200K
Jan 22 07:49:11 np0005592158 kernel: Write protecting the kernel read-only data: 30720k
Jan 22 07:49:11 np0005592158 kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Jan 22 07:49:11 np0005592158 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 22 07:49:11 np0005592158 kernel: Run /init as init process
Jan 22 07:49:11 np0005592158 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 22 07:49:11 np0005592158 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 22 07:49:11 np0005592158 kernel: usb 1-1: Product: QEMU USB Tablet
Jan 22 07:49:11 np0005592158 kernel: usb 1-1: Manufacturer: QEMU
Jan 22 07:49:11 np0005592158 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 22 07:49:11 np0005592158 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 22 07:49:11 np0005592158 systemd: Detected virtualization kvm.
Jan 22 07:49:11 np0005592158 systemd: Detected architecture x86-64.
Jan 22 07:49:11 np0005592158 systemd: Running in initrd.
Jan 22 07:49:11 np0005592158 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 22 07:49:11 np0005592158 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 22 07:49:11 np0005592158 systemd: No hostname configured, using default hostname.
Jan 22 07:49:11 np0005592158 systemd: Hostname set to <localhost>.
Jan 22 07:49:11 np0005592158 systemd: Initializing machine ID from VM UUID.
Jan 22 07:49:11 np0005592158 systemd: Queued start job for default target Initrd Default Target.
Jan 22 07:49:11 np0005592158 systemd: Started Dispatch Password Requests to Console Directory Watch.
Jan 22 07:49:11 np0005592158 systemd: Reached target Local Encrypted Volumes.
Jan 22 07:49:11 np0005592158 systemd: Reached target Initrd /usr File System.
Jan 22 07:49:11 np0005592158 systemd: Reached target Local File Systems.
Jan 22 07:49:11 np0005592158 systemd: Reached target Path Units.
Jan 22 07:49:11 np0005592158 systemd: Reached target Slice Units.
Jan 22 07:49:11 np0005592158 systemd: Reached target Swaps.
Jan 22 07:49:11 np0005592158 systemd: Reached target Timer Units.
Jan 22 07:49:11 np0005592158 systemd: Listening on D-Bus System Message Bus Socket.
Jan 22 07:49:11 np0005592158 systemd: Listening on Journal Socket (/dev/log).
Jan 22 07:49:11 np0005592158 systemd: Listening on Journal Socket.
Jan 22 07:49:11 np0005592158 systemd: Listening on udev Control Socket.
Jan 22 07:49:11 np0005592158 systemd: Listening on udev Kernel Socket.
Jan 22 07:49:11 np0005592158 systemd: Reached target Socket Units.
Jan 22 07:49:11 np0005592158 systemd: Starting Create List of Static Device Nodes...
Jan 22 07:49:11 np0005592158 systemd: Starting Journal Service...
Jan 22 07:49:11 np0005592158 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 22 07:49:11 np0005592158 systemd: Starting Apply Kernel Variables...
Jan 22 07:49:11 np0005592158 systemd: Starting Create System Users...
Jan 22 07:49:11 np0005592158 systemd: Starting Setup Virtual Console...
Jan 22 07:49:11 np0005592158 systemd: Finished Create List of Static Device Nodes.
Jan 22 07:49:11 np0005592158 systemd: Finished Apply Kernel Variables.
Jan 22 07:49:11 np0005592158 systemd: Finished Create System Users.
Jan 22 07:49:11 np0005592158 systemd-journald[307]: Journal started
Jan 22 07:49:11 np0005592158 systemd-journald[307]: Runtime Journal (/run/log/journal/2198fae51aa3494083f6677ed40734bb) is 8.0M, max 153.6M, 145.6M free.
Jan 22 07:49:11 np0005592158 systemd-sysusers[312]: Creating group 'users' with GID 100.
Jan 22 07:49:11 np0005592158 systemd-sysusers[312]: Creating group 'dbus' with GID 81.
Jan 22 07:49:11 np0005592158 systemd-sysusers[312]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 22 07:49:11 np0005592158 systemd: Started Journal Service.
Jan 22 07:49:11 np0005592158 systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 22 07:49:11 np0005592158 systemd[1]: Starting Create Volatile Files and Directories...
Jan 22 07:49:11 np0005592158 systemd[1]: Finished Create Volatile Files and Directories.
Jan 22 07:49:11 np0005592158 systemd[1]: Finished Setup Virtual Console.
Jan 22 07:49:11 np0005592158 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 22 07:49:11 np0005592158 systemd[1]: Starting dracut cmdline hook...
Jan 22 07:49:11 np0005592158 systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 22 07:49:11 np0005592158 dracut-cmdline[326]: dracut-9 dracut-057-102.git20250818.el9
Jan 22 07:49:11 np0005592158 dracut-cmdline[326]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 22 07:49:11 np0005592158 systemd[1]: Finished dracut cmdline hook.
Jan 22 07:49:11 np0005592158 systemd[1]: Starting dracut pre-udev hook...
Jan 22 07:49:11 np0005592158 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 22 07:49:11 np0005592158 kernel: device-mapper: uevent: version 1.0.3
Jan 22 07:49:11 np0005592158 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 22 07:49:11 np0005592158 kernel: RPC: Registered named UNIX socket transport module.
Jan 22 07:49:11 np0005592158 kernel: RPC: Registered udp transport module.
Jan 22 07:49:11 np0005592158 kernel: RPC: Registered tcp transport module.
Jan 22 07:49:11 np0005592158 kernel: RPC: Registered tcp-with-tls transport module.
Jan 22 07:49:11 np0005592158 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 22 07:49:11 np0005592158 rpc.statd[443]: Version 2.5.4 starting
Jan 22 07:49:11 np0005592158 rpc.statd[443]: Initializing NSM state
Jan 22 07:49:11 np0005592158 rpc.idmapd[448]: Setting log level to 0
Jan 22 07:49:11 np0005592158 systemd[1]: Finished dracut pre-udev hook.
Jan 22 07:49:11 np0005592158 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 22 07:49:11 np0005592158 systemd-udevd[461]: Using default interface naming scheme 'rhel-9.0'.
Jan 22 07:49:11 np0005592158 systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 22 07:49:11 np0005592158 systemd[1]: Starting dracut pre-trigger hook...
Jan 22 07:49:11 np0005592158 systemd[1]: Finished dracut pre-trigger hook.
Jan 22 07:49:11 np0005592158 systemd[1]: Starting Coldplug All udev Devices...
Jan 22 07:49:11 np0005592158 systemd[1]: Created slice Slice /system/modprobe.
Jan 22 07:49:12 np0005592158 systemd[1]: Starting Load Kernel Module configfs...
Jan 22 07:49:12 np0005592158 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 22 07:49:12 np0005592158 systemd[1]: Finished Load Kernel Module configfs.
Jan 22 07:49:12 np0005592158 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 22 07:49:12 np0005592158 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 22 07:49:12 np0005592158 systemd-udevd[487]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 07:49:12 np0005592158 kernel: scsi host0: ata_piix
Jan 22 07:49:12 np0005592158 kernel: scsi host1: ata_piix
Jan 22 07:49:12 np0005592158 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 22 07:49:12 np0005592158 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 22 07:49:12 np0005592158 kernel: vda: vda1
Jan 22 07:49:12 np0005592158 systemd[1]: Mounting Kernel Configuration File System...
Jan 22 07:49:12 np0005592158 systemd[1]: Finished Coldplug All udev Devices.
Jan 22 07:49:12 np0005592158 systemd[1]: Mounted Kernel Configuration File System.
Jan 22 07:49:12 np0005592158 systemd[1]: Reached target System Initialization.
Jan 22 07:49:12 np0005592158 systemd[1]: Reached target Basic System.
Jan 22 07:49:12 np0005592158 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 22 07:49:12 np0005592158 systemd[1]: Reached target Network.
Jan 22 07:49:12 np0005592158 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 22 07:49:12 np0005592158 systemd[1]: Starting dracut initqueue hook...
Jan 22 07:49:12 np0005592158 kernel: ata1: found unknown device (class 0)
Jan 22 07:49:12 np0005592158 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 22 07:49:12 np0005592158 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 22 07:49:12 np0005592158 systemd[1]: Found device /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 22 07:49:12 np0005592158 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 22 07:49:12 np0005592158 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 22 07:49:12 np0005592158 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 22 07:49:12 np0005592158 systemd[1]: Reached target Initrd Root Device.
Jan 22 07:49:12 np0005592158 systemd[1]: Finished dracut initqueue hook.
Jan 22 07:49:12 np0005592158 systemd[1]: Reached target Preparation for Remote File Systems.
Jan 22 07:49:12 np0005592158 systemd[1]: Reached target Remote Encrypted Volumes.
Jan 22 07:49:12 np0005592158 systemd[1]: Reached target Remote File Systems.
Jan 22 07:49:12 np0005592158 systemd[1]: Starting dracut pre-mount hook...
Jan 22 07:49:12 np0005592158 systemd[1]: Finished dracut pre-mount hook.
Jan 22 07:49:12 np0005592158 systemd[1]: Starting File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40...
Jan 22 07:49:12 np0005592158 systemd-fsck[558]: /usr/sbin/fsck.xfs: XFS file system.
Jan 22 07:49:12 np0005592158 systemd[1]: Finished File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 22 07:49:12 np0005592158 systemd[1]: Mounting /sysroot...
Jan 22 07:49:13 np0005592158 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 22 07:49:13 np0005592158 kernel: XFS (vda1): Mounting V5 Filesystem 22ac9141-3960-4912-b20e-19fc8a328d40
Jan 22 07:49:13 np0005592158 kernel: XFS (vda1): Ending clean mount
Jan 22 07:49:13 np0005592158 systemd[1]: Mounted /sysroot.
Jan 22 07:49:13 np0005592158 systemd[1]: Reached target Initrd Root File System.
Jan 22 07:49:13 np0005592158 systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 22 07:49:13 np0005592158 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 22 07:49:13 np0005592158 systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 22 07:49:13 np0005592158 systemd[1]: Reached target Initrd File Systems.
Jan 22 07:49:13 np0005592158 systemd[1]: Reached target Initrd Default Target.
Jan 22 07:49:13 np0005592158 systemd[1]: Starting dracut mount hook...
Jan 22 07:49:13 np0005592158 systemd[1]: Finished dracut mount hook.
Jan 22 07:49:13 np0005592158 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 22 07:49:13 np0005592158 rpc.idmapd[448]: exiting on signal 15
Jan 22 07:49:13 np0005592158 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 22 07:49:13 np0005592158 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 22 07:49:13 np0005592158 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 22 07:49:13 np0005592158 systemd[1]: Stopped target Network.
Jan 22 07:49:13 np0005592158 systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 22 07:49:13 np0005592158 systemd[1]: Stopped target Timer Units.
Jan 22 07:49:13 np0005592158 systemd[1]: dbus.socket: Deactivated successfully.
Jan 22 07:49:13 np0005592158 systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 22 07:49:13 np0005592158 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 22 07:49:13 np0005592158 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 22 07:49:13 np0005592158 systemd[1]: Stopped target Initrd Default Target.
Jan 22 07:49:13 np0005592158 systemd[1]: Stopped target Basic System.
Jan 22 07:49:13 np0005592158 systemd[1]: Stopped target Initrd Root Device.
Jan 22 07:49:13 np0005592158 systemd[1]: Stopped target Initrd /usr File System.
Jan 22 07:49:13 np0005592158 systemd[1]: Stopped target Path Units.
Jan 22 07:49:13 np0005592158 systemd[1]: Stopped target Remote File Systems.
Jan 22 07:49:13 np0005592158 systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 22 07:49:13 np0005592158 systemd[1]: Stopped target Slice Units.
Jan 22 07:49:13 np0005592158 systemd[1]: Stopped target Socket Units.
Jan 22 07:49:13 np0005592158 systemd[1]: Stopped target System Initialization.
Jan 22 07:49:13 np0005592158 systemd[1]: Stopped target Local File Systems.
Jan 22 07:49:13 np0005592158 systemd[1]: Stopped target Swaps.
Jan 22 07:49:13 np0005592158 systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 22 07:49:13 np0005592158 systemd[1]: Stopped dracut mount hook.
Jan 22 07:49:13 np0005592158 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 22 07:49:13 np0005592158 systemd[1]: Stopped dracut pre-mount hook.
Jan 22 07:49:13 np0005592158 systemd[1]: Stopped target Local Encrypted Volumes.
Jan 22 07:49:13 np0005592158 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 22 07:49:13 np0005592158 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 22 07:49:13 np0005592158 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 22 07:49:13 np0005592158 systemd[1]: Stopped dracut initqueue hook.
Jan 22 07:49:13 np0005592158 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 22 07:49:13 np0005592158 systemd[1]: Stopped Apply Kernel Variables.
Jan 22 07:49:13 np0005592158 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 22 07:49:13 np0005592158 systemd[1]: Stopped Create Volatile Files and Directories.
Jan 22 07:49:13 np0005592158 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 22 07:49:13 np0005592158 systemd[1]: Stopped Coldplug All udev Devices.
Jan 22 07:49:13 np0005592158 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 22 07:49:13 np0005592158 systemd[1]: Stopped dracut pre-trigger hook.
Jan 22 07:49:13 np0005592158 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 22 07:49:13 np0005592158 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 22 07:49:13 np0005592158 systemd[1]: Stopped Setup Virtual Console.
Jan 22 07:49:13 np0005592158 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 22 07:49:13 np0005592158 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 22 07:49:13 np0005592158 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 22 07:49:13 np0005592158 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 22 07:49:13 np0005592158 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 22 07:49:13 np0005592158 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 22 07:49:13 np0005592158 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 22 07:49:13 np0005592158 systemd[1]: Closed udev Control Socket.
Jan 22 07:49:13 np0005592158 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 22 07:49:13 np0005592158 systemd[1]: Closed udev Kernel Socket.
Jan 22 07:49:13 np0005592158 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 22 07:49:13 np0005592158 systemd[1]: Stopped dracut pre-udev hook.
Jan 22 07:49:13 np0005592158 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 22 07:49:13 np0005592158 systemd[1]: Stopped dracut cmdline hook.
Jan 22 07:49:13 np0005592158 systemd[1]: Starting Cleanup udev Database...
Jan 22 07:49:13 np0005592158 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 22 07:49:13 np0005592158 systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 22 07:49:13 np0005592158 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 22 07:49:13 np0005592158 systemd[1]: Stopped Create List of Static Device Nodes.
Jan 22 07:49:13 np0005592158 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 22 07:49:13 np0005592158 systemd[1]: Stopped Create System Users.
Jan 22 07:49:13 np0005592158 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 22 07:49:13 np0005592158 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 22 07:49:13 np0005592158 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 22 07:49:13 np0005592158 systemd[1]: Finished Cleanup udev Database.
Jan 22 07:49:13 np0005592158 systemd[1]: Reached target Switch Root.
Jan 22 07:49:13 np0005592158 systemd[1]: Starting Switch Root...
Jan 22 07:49:13 np0005592158 systemd[1]: Switching root.
Jan 22 07:49:13 np0005592158 systemd-journald[307]: Journal stopped
Jan 22 07:49:14 np0005592158 systemd-journald: Received SIGTERM from PID 1 (systemd).
Jan 22 07:49:14 np0005592158 kernel: audit: type=1404 audit(1769086153.477:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 22 07:49:14 np0005592158 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 07:49:14 np0005592158 kernel: SELinux:  policy capability open_perms=1
Jan 22 07:49:14 np0005592158 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 07:49:14 np0005592158 kernel: SELinux:  policy capability always_check_network=0
Jan 22 07:49:14 np0005592158 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 07:49:14 np0005592158 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 07:49:14 np0005592158 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 07:49:14 np0005592158 kernel: audit: type=1403 audit(1769086153.635:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 22 07:49:14 np0005592158 systemd: Successfully loaded SELinux policy in 161.110ms.
Jan 22 07:49:14 np0005592158 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 31.027ms.
Jan 22 07:49:14 np0005592158 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 22 07:49:14 np0005592158 systemd: Detected virtualization kvm.
Jan 22 07:49:14 np0005592158 systemd: Detected architecture x86-64.
Jan 22 07:49:14 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 07:49:14 np0005592158 systemd: initrd-switch-root.service: Deactivated successfully.
Jan 22 07:49:14 np0005592158 systemd: Stopped Switch Root.
Jan 22 07:49:14 np0005592158 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 22 07:49:14 np0005592158 systemd: Created slice Slice /system/getty.
Jan 22 07:49:14 np0005592158 systemd: Created slice Slice /system/serial-getty.
Jan 22 07:49:14 np0005592158 systemd: Created slice Slice /system/sshd-keygen.
Jan 22 07:49:14 np0005592158 systemd: Created slice User and Session Slice.
Jan 22 07:49:14 np0005592158 systemd: Started Dispatch Password Requests to Console Directory Watch.
Jan 22 07:49:14 np0005592158 systemd: Started Forward Password Requests to Wall Directory Watch.
Jan 22 07:49:14 np0005592158 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 22 07:49:14 np0005592158 systemd: Reached target Local Encrypted Volumes.
Jan 22 07:49:14 np0005592158 systemd: Stopped target Switch Root.
Jan 22 07:49:14 np0005592158 systemd: Stopped target Initrd File Systems.
Jan 22 07:49:14 np0005592158 systemd: Stopped target Initrd Root File System.
Jan 22 07:49:14 np0005592158 systemd: Reached target Local Integrity Protected Volumes.
Jan 22 07:49:14 np0005592158 systemd: Reached target Path Units.
Jan 22 07:49:14 np0005592158 systemd: Reached target rpc_pipefs.target.
Jan 22 07:49:14 np0005592158 systemd: Reached target Slice Units.
Jan 22 07:49:14 np0005592158 systemd: Reached target Swaps.
Jan 22 07:49:14 np0005592158 systemd: Reached target Local Verity Protected Volumes.
Jan 22 07:49:14 np0005592158 systemd: Listening on RPCbind Server Activation Socket.
Jan 22 07:49:14 np0005592158 systemd: Reached target RPC Port Mapper.
Jan 22 07:49:14 np0005592158 systemd: Listening on Process Core Dump Socket.
Jan 22 07:49:14 np0005592158 systemd: Listening on initctl Compatibility Named Pipe.
Jan 22 07:49:14 np0005592158 systemd: Listening on udev Control Socket.
Jan 22 07:49:14 np0005592158 systemd: Listening on udev Kernel Socket.
Jan 22 07:49:14 np0005592158 systemd: Mounting Huge Pages File System...
Jan 22 07:49:14 np0005592158 systemd: Mounting POSIX Message Queue File System...
Jan 22 07:49:14 np0005592158 systemd: Mounting Kernel Debug File System...
Jan 22 07:49:14 np0005592158 systemd: Mounting Kernel Trace File System...
Jan 22 07:49:14 np0005592158 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 22 07:49:14 np0005592158 systemd: Starting Create List of Static Device Nodes...
Jan 22 07:49:14 np0005592158 systemd: Starting Load Kernel Module configfs...
Jan 22 07:49:14 np0005592158 systemd: Starting Load Kernel Module drm...
Jan 22 07:49:14 np0005592158 systemd: Starting Load Kernel Module efi_pstore...
Jan 22 07:49:14 np0005592158 systemd: Starting Load Kernel Module fuse...
Jan 22 07:49:14 np0005592158 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 22 07:49:14 np0005592158 systemd: systemd-fsck-root.service: Deactivated successfully.
Jan 22 07:49:14 np0005592158 systemd: Stopped File System Check on Root Device.
Jan 22 07:49:14 np0005592158 systemd: Stopped Journal Service.
Jan 22 07:49:14 np0005592158 systemd: Starting Journal Service...
Jan 22 07:49:14 np0005592158 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 22 07:49:14 np0005592158 systemd: Starting Generate network units from Kernel command line...
Jan 22 07:49:14 np0005592158 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 22 07:49:14 np0005592158 systemd: Starting Remount Root and Kernel File Systems...
Jan 22 07:49:14 np0005592158 kernel: fuse: init (API version 7.37)
Jan 22 07:49:14 np0005592158 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 22 07:49:14 np0005592158 systemd: Starting Apply Kernel Variables...
Jan 22 07:49:14 np0005592158 systemd: Starting Coldplug All udev Devices...
Jan 22 07:49:14 np0005592158 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 22 07:49:14 np0005592158 systemd: Mounted Huge Pages File System.
Jan 22 07:49:14 np0005592158 systemd: Mounted POSIX Message Queue File System.
Jan 22 07:49:14 np0005592158 systemd: Mounted Kernel Debug File System.
Jan 22 07:49:14 np0005592158 systemd: Mounted Kernel Trace File System.
Jan 22 07:49:14 np0005592158 systemd: Finished Create List of Static Device Nodes.
Jan 22 07:49:14 np0005592158 systemd-journald[682]: Journal started
Jan 22 07:49:14 np0005592158 systemd-journald[682]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 22 07:49:14 np0005592158 systemd[1]: Queued start job for default target Multi-User System.
Jan 22 07:49:14 np0005592158 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 22 07:49:14 np0005592158 systemd: Started Journal Service.
Jan 22 07:49:14 np0005592158 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 22 07:49:14 np0005592158 systemd[1]: Finished Load Kernel Module configfs.
Jan 22 07:49:14 np0005592158 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 22 07:49:14 np0005592158 systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 22 07:49:14 np0005592158 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 22 07:49:14 np0005592158 systemd[1]: Finished Load Kernel Module fuse.
Jan 22 07:49:14 np0005592158 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 22 07:49:14 np0005592158 kernel: ACPI: bus type drm_connector registered
Jan 22 07:49:14 np0005592158 systemd[1]: Finished Generate network units from Kernel command line.
Jan 22 07:49:14 np0005592158 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 22 07:49:14 np0005592158 systemd[1]: Finished Load Kernel Module drm.
Jan 22 07:49:14 np0005592158 systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 22 07:49:14 np0005592158 systemd[1]: Finished Apply Kernel Variables.
Jan 22 07:49:14 np0005592158 systemd[1]: Mounting FUSE Control File System...
Jan 22 07:49:14 np0005592158 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 22 07:49:14 np0005592158 systemd[1]: Starting Rebuild Hardware Database...
Jan 22 07:49:14 np0005592158 systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 22 07:49:14 np0005592158 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 22 07:49:14 np0005592158 systemd[1]: Starting Load/Save OS Random Seed...
Jan 22 07:49:14 np0005592158 systemd[1]: Starting Create System Users...
Jan 22 07:49:14 np0005592158 systemd-journald[682]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 22 07:49:14 np0005592158 systemd-journald[682]: Received client request to flush runtime journal.
Jan 22 07:49:14 np0005592158 systemd[1]: Mounted FUSE Control File System.
Jan 22 07:49:14 np0005592158 systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 22 07:49:14 np0005592158 systemd[1]: Finished Load/Save OS Random Seed.
Jan 22 07:49:14 np0005592158 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 22 07:49:14 np0005592158 systemd[1]: Finished Create System Users.
Jan 22 07:49:14 np0005592158 systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 22 07:49:14 np0005592158 systemd[1]: Finished Coldplug All udev Devices.
Jan 22 07:49:14 np0005592158 systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 22 07:49:14 np0005592158 systemd[1]: Reached target Preparation for Local File Systems.
Jan 22 07:49:14 np0005592158 systemd[1]: Reached target Local File Systems.
Jan 22 07:49:14 np0005592158 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 22 07:49:14 np0005592158 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 22 07:49:14 np0005592158 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 22 07:49:14 np0005592158 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 22 07:49:14 np0005592158 systemd[1]: Starting Automatic Boot Loader Update...
Jan 22 07:49:14 np0005592158 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 22 07:49:14 np0005592158 systemd[1]: Starting Create Volatile Files and Directories...
Jan 22 07:49:14 np0005592158 bootctl[699]: Couldn't find EFI system partition, skipping.
Jan 22 07:49:14 np0005592158 systemd[1]: Finished Automatic Boot Loader Update.
Jan 22 07:49:14 np0005592158 systemd[1]: Finished Create Volatile Files and Directories.
Jan 22 07:49:14 np0005592158 systemd[1]: Starting Security Auditing Service...
Jan 22 07:49:14 np0005592158 auditd[704]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 22 07:49:14 np0005592158 systemd[1]: Starting RPC Bind...
Jan 22 07:49:14 np0005592158 auditd[704]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 22 07:49:14 np0005592158 systemd[1]: Starting Rebuild Journal Catalog...
Jan 22 07:49:14 np0005592158 systemd[1]: Started RPC Bind.
Jan 22 07:49:14 np0005592158 systemd[1]: Finished Rebuild Journal Catalog.
Jan 22 07:49:14 np0005592158 augenrules[710]: /sbin/augenrules: No change
Jan 22 07:49:14 np0005592158 augenrules[725]: No rules
Jan 22 07:49:14 np0005592158 augenrules[725]: enabled 1
Jan 22 07:49:14 np0005592158 augenrules[725]: failure 1
Jan 22 07:49:14 np0005592158 augenrules[725]: pid 704
Jan 22 07:49:14 np0005592158 augenrules[725]: rate_limit 0
Jan 22 07:49:14 np0005592158 augenrules[725]: backlog_limit 8192
Jan 22 07:49:14 np0005592158 augenrules[725]: lost 0
Jan 22 07:49:14 np0005592158 augenrules[725]: backlog 3
Jan 22 07:49:14 np0005592158 augenrules[725]: backlog_wait_time 60000
Jan 22 07:49:14 np0005592158 augenrules[725]: backlog_wait_time_actual 0
Jan 22 07:49:14 np0005592158 augenrules[725]: enabled 1
Jan 22 07:49:14 np0005592158 augenrules[725]: failure 1
Jan 22 07:49:14 np0005592158 augenrules[725]: pid 704
Jan 22 07:49:14 np0005592158 augenrules[725]: rate_limit 0
Jan 22 07:49:14 np0005592158 augenrules[725]: backlog_limit 8192
Jan 22 07:49:14 np0005592158 augenrules[725]: lost 0
Jan 22 07:49:14 np0005592158 augenrules[725]: backlog 2
Jan 22 07:49:14 np0005592158 augenrules[725]: backlog_wait_time 60000
Jan 22 07:49:14 np0005592158 augenrules[725]: backlog_wait_time_actual 0
Jan 22 07:49:14 np0005592158 augenrules[725]: enabled 1
Jan 22 07:49:14 np0005592158 augenrules[725]: failure 1
Jan 22 07:49:14 np0005592158 augenrules[725]: pid 704
Jan 22 07:49:14 np0005592158 augenrules[725]: rate_limit 0
Jan 22 07:49:14 np0005592158 augenrules[725]: backlog_limit 8192
Jan 22 07:49:14 np0005592158 augenrules[725]: lost 0
Jan 22 07:49:14 np0005592158 augenrules[725]: backlog 2
Jan 22 07:49:14 np0005592158 augenrules[725]: backlog_wait_time 60000
Jan 22 07:49:14 np0005592158 augenrules[725]: backlog_wait_time_actual 0
Jan 22 07:49:14 np0005592158 systemd[1]: Started Security Auditing Service.
Jan 22 07:49:14 np0005592158 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 22 07:49:14 np0005592158 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 22 07:49:14 np0005592158 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 22 07:49:15 np0005592158 systemd[1]: Finished Rebuild Hardware Database.
Jan 22 07:49:15 np0005592158 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 22 07:49:15 np0005592158 systemd[1]: Starting Update is Completed...
Jan 22 07:49:15 np0005592158 systemd[1]: Finished Update is Completed.
Jan 22 07:49:15 np0005592158 systemd-udevd[733]: Using default interface naming scheme 'rhel-9.0'.
Jan 22 07:49:15 np0005592158 systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 22 07:49:15 np0005592158 systemd[1]: Reached target System Initialization.
Jan 22 07:49:15 np0005592158 systemd[1]: Started dnf makecache --timer.
Jan 22 07:49:15 np0005592158 systemd[1]: Started Daily rotation of log files.
Jan 22 07:49:15 np0005592158 systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 22 07:49:15 np0005592158 systemd[1]: Reached target Timer Units.
Jan 22 07:49:15 np0005592158 systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 22 07:49:15 np0005592158 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 22 07:49:15 np0005592158 systemd[1]: Reached target Socket Units.
Jan 22 07:49:15 np0005592158 systemd[1]: Starting D-Bus System Message Bus...
Jan 22 07:49:15 np0005592158 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 22 07:49:15 np0005592158 systemd[1]: Starting Load Kernel Module configfs...
Jan 22 07:49:15 np0005592158 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 22 07:49:15 np0005592158 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 22 07:49:15 np0005592158 systemd[1]: Finished Load Kernel Module configfs.
Jan 22 07:49:15 np0005592158 systemd-udevd[736]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 07:49:15 np0005592158 systemd[1]: Started D-Bus System Message Bus.
Jan 22 07:49:15 np0005592158 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 22 07:49:15 np0005592158 systemd[1]: Reached target Basic System.
Jan 22 07:49:15 np0005592158 dbus-broker-lau[758]: Ready
Jan 22 07:49:15 np0005592158 systemd[1]: Starting NTP client/server...
Jan 22 07:49:15 np0005592158 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 22 07:49:15 np0005592158 systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 22 07:49:15 np0005592158 systemd[1]: Starting IPv4 firewall with iptables...
Jan 22 07:49:15 np0005592158 systemd[1]: Started irqbalance daemon.
Jan 22 07:49:15 np0005592158 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 22 07:49:15 np0005592158 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 22 07:49:15 np0005592158 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 22 07:49:15 np0005592158 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 22 07:49:15 np0005592158 systemd[1]: Reached target sshd-keygen.target.
Jan 22 07:49:15 np0005592158 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 22 07:49:15 np0005592158 systemd[1]: Reached target User and Group Name Lookups.
Jan 22 07:49:15 np0005592158 systemd[1]: Starting User Login Management...
Jan 22 07:49:15 np0005592158 systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 22 07:49:15 np0005592158 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 22 07:49:15 np0005592158 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 22 07:49:15 np0005592158 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 22 07:49:15 np0005592158 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 22 07:49:15 np0005592158 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 22 07:49:15 np0005592158 systemd-logind[787]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 22 07:49:15 np0005592158 systemd-logind[787]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 22 07:49:15 np0005592158 systemd-logind[787]: New seat seat0.
Jan 22 07:49:15 np0005592158 systemd[1]: Started User Login Management.
Jan 22 07:49:15 np0005592158 chronyd[807]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 22 07:49:15 np0005592158 chronyd[807]: Loaded 0 symmetric keys
Jan 22 07:49:15 np0005592158 chronyd[807]: Using right/UTC timezone to obtain leap second data
Jan 22 07:49:15 np0005592158 chronyd[807]: Loaded seccomp filter (level 2)
Jan 22 07:49:15 np0005592158 systemd[1]: Started NTP client/server.
Jan 22 07:49:15 np0005592158 kernel: kvm_amd: TSC scaling supported
Jan 22 07:49:15 np0005592158 kernel: kvm_amd: Nested Virtualization enabled
Jan 22 07:49:15 np0005592158 kernel: kvm_amd: Nested Paging enabled
Jan 22 07:49:15 np0005592158 kernel: kvm_amd: LBR virtualization supported
Jan 22 07:49:15 np0005592158 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 22 07:49:15 np0005592158 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 22 07:49:15 np0005592158 kernel: Console: switching to colour dummy device 80x25
Jan 22 07:49:15 np0005592158 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 22 07:49:15 np0005592158 kernel: [drm] features: -context_init
Jan 22 07:49:15 np0005592158 kernel: [drm] number of scanouts: 1
Jan 22 07:49:15 np0005592158 kernel: [drm] number of cap sets: 0
Jan 22 07:49:15 np0005592158 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 22 07:49:15 np0005592158 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 22 07:49:15 np0005592158 kernel: Console: switching to colour frame buffer device 128x48
Jan 22 07:49:15 np0005592158 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 22 07:49:15 np0005592158 iptables.init[784]: iptables: Applying firewall rules: [  OK  ]
Jan 22 07:49:15 np0005592158 systemd[1]: Finished IPv4 firewall with iptables.
Jan 22 07:49:16 np0005592158 cloud-init[842]: Cloud-init v. 24.4-8.el9 running 'init-local' at Thu, 22 Jan 2026 12:49:16 +0000. Up 8.96 seconds.
Jan 22 07:49:16 np0005592158 systemd[1]: run-cloud\x2dinit-tmp-tmp3xfq58m1.mount: Deactivated successfully.
Jan 22 07:49:16 np0005592158 systemd[1]: Starting Hostname Service...
Jan 22 07:49:16 np0005592158 systemd[1]: Started Hostname Service.
Jan 22 07:49:16 np0005592158 systemd-hostnamed[856]: Hostname set to <np0005592158.novalocal> (static)
Jan 22 07:49:17 np0005592158 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 22 07:49:17 np0005592158 systemd[1]: Reached target Preparation for Network.
Jan 22 07:49:17 np0005592158 systemd[1]: Starting Network Manager...
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.1710] NetworkManager (version 1.54.3-2.el9) is starting... (boot:d923d6f4-79ae-48f6-b1f3-cf5ec2bceff3)
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.1715] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.1789] manager[0x558df8b9f000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.1825] hostname: hostname: using hostnamed
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.1826] hostname: static hostname changed from (none) to "np0005592158.novalocal"
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.1830] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.1959] manager[0x558df8b9f000]: rfkill: Wi-Fi hardware radio set enabled
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.1960] manager[0x558df8b9f000]: rfkill: WWAN hardware radio set enabled
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.2005] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.2006] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.2007] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.2008] manager: Networking is enabled by state file
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.2010] settings: Loaded settings plugin: keyfile (internal)
Jan 22 07:49:17 np0005592158 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.2020] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.2041] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.2054] dhcp: init: Using DHCP client 'internal'
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.2057] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.2072] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.2080] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.2090] device (lo): Activation: starting connection 'lo' (85925d65-d6c4-4300-b142-abef792fcfc1)
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.2101] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.2105] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.2134] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.2139] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.2142] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.2144] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.2146] device (eth0): carrier: link connected
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.2150] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.2158] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.2164] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.2168] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.2169] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.2172] manager: NetworkManager state is now CONNECTING
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.2174] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.2184] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.2188] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.2219] dhcp4 (eth0): state changed new lease, address=38.102.83.119
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.2227] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.2248] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 07:49:17 np0005592158 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 22 07:49:17 np0005592158 systemd[1]: Started Network Manager.
Jan 22 07:49:17 np0005592158 systemd[1]: Reached target Network.
Jan 22 07:49:17 np0005592158 systemd[1]: Starting Network Manager Wait Online...
Jan 22 07:49:17 np0005592158 systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 22 07:49:17 np0005592158 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.2484] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.2487] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.2488] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.2491] device (lo): Activation: successful, device activated.
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.2495] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.2498] manager: NetworkManager state is now CONNECTED_SITE
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.2500] device (eth0): Activation: successful, device activated.
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.2507] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 22 07:49:17 np0005592158 NetworkManager[860]: <info>  [1769086157.2510] manager: startup complete
Jan 22 07:49:17 np0005592158 systemd[1]: Started GSSAPI Proxy Daemon.
Jan 22 07:49:17 np0005592158 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 22 07:49:17 np0005592158 systemd[1]: Reached target NFS client services.
Jan 22 07:49:17 np0005592158 systemd[1]: Reached target Preparation for Remote File Systems.
Jan 22 07:49:17 np0005592158 systemd[1]: Reached target Remote File Systems.
Jan 22 07:49:17 np0005592158 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 22 07:49:17 np0005592158 systemd[1]: Finished Network Manager Wait Online.
Jan 22 07:49:17 np0005592158 systemd[1]: Starting Cloud-init: Network Stage...
Jan 22 07:49:17 np0005592158 cloud-init[923]: Cloud-init v. 24.4-8.el9 running 'init' at Thu, 22 Jan 2026 12:49:17 +0000. Up 9.93 seconds.
Jan 22 07:49:17 np0005592158 cloud-init[923]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 22 07:49:17 np0005592158 cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 22 07:49:17 np0005592158 cloud-init[923]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 22 07:49:17 np0005592158 cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 22 07:49:17 np0005592158 cloud-init[923]: ci-info: |  eth0  | True |        38.102.83.119         | 255.255.255.0 | global | fa:16:3e:78:47:38 |
Jan 22 07:49:17 np0005592158 cloud-init[923]: ci-info: |  eth0  | True | fe80::f816:3eff:fe78:4738/64 |       .       |  link  | fa:16:3e:78:47:38 |
Jan 22 07:49:17 np0005592158 cloud-init[923]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 22 07:49:17 np0005592158 cloud-init[923]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 22 07:49:17 np0005592158 cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 22 07:49:17 np0005592158 cloud-init[923]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Jan 22 07:49:17 np0005592158 cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 22 07:49:17 np0005592158 cloud-init[923]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Jan 22 07:49:17 np0005592158 cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 22 07:49:17 np0005592158 cloud-init[923]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Jan 22 07:49:17 np0005592158 cloud-init[923]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Jan 22 07:49:17 np0005592158 cloud-init[923]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Jan 22 07:49:17 np0005592158 cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 22 07:49:17 np0005592158 cloud-init[923]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 22 07:49:17 np0005592158 cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 22 07:49:17 np0005592158 cloud-init[923]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 22 07:49:17 np0005592158 cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 22 07:49:17 np0005592158 cloud-init[923]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 22 07:49:17 np0005592158 cloud-init[923]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Jan 22 07:49:17 np0005592158 cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 22 07:49:19 np0005592158 cloud-init[923]: Generating public/private rsa key pair.
Jan 22 07:49:19 np0005592158 cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 22 07:49:19 np0005592158 cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 22 07:49:19 np0005592158 cloud-init[923]: The key fingerprint is:
Jan 22 07:49:19 np0005592158 cloud-init[923]: SHA256:tLLf3sAR7bmWyT372+gP1nZboxLQL9EoKjKfZihVlQc root@np0005592158.novalocal
Jan 22 07:49:19 np0005592158 cloud-init[923]: The key's randomart image is:
Jan 22 07:49:19 np0005592158 cloud-init[923]: +---[RSA 3072]----+
Jan 22 07:49:19 np0005592158 cloud-init[923]: |       Eo        |
Jan 22 07:49:19 np0005592158 cloud-init[923]: |       o . .     |
Jan 22 07:49:19 np0005592158 cloud-init[923]: |      . o o +    |
Jan 22 07:49:19 np0005592158 cloud-init[923]: |     . . + * o   |
Jan 22 07:49:19 np0005592158 cloud-init[923]: |    . . S + =    |
Jan 22 07:49:19 np0005592158 cloud-init[923]: |   + . + . = * . |
Jan 22 07:49:19 np0005592158 cloud-init[923]: |  . = +   o O =.=|
Jan 22 07:49:19 np0005592158 cloud-init[923]: | . . = . . = ..**|
Jan 22 07:49:19 np0005592158 cloud-init[923]: |  . o   ..o oo+==|
Jan 22 07:49:19 np0005592158 cloud-init[923]: +----[SHA256]-----+
Jan 22 07:49:19 np0005592158 cloud-init[923]: Generating public/private ecdsa key pair.
Jan 22 07:49:19 np0005592158 cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 22 07:49:19 np0005592158 cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 22 07:49:19 np0005592158 cloud-init[923]: The key fingerprint is:
Jan 22 07:49:19 np0005592158 cloud-init[923]: SHA256:bpFA6WEyFisKNlKIhWEOIxmhWQS1WLWXIaTPX+Qn+zM root@np0005592158.novalocal
Jan 22 07:49:19 np0005592158 cloud-init[923]: The key's randomart image is:
Jan 22 07:49:19 np0005592158 cloud-init[923]: +---[ECDSA 256]---+
Jan 22 07:49:19 np0005592158 cloud-init[923]: |X&Bo=.o.         |
Jan 22 07:49:19 np0005592158 cloud-init[923]: |@B o+=+o         |
Jan 22 07:49:19 np0005592158 cloud-init[923]: |*++.o=+..        |
Jan 22 07:49:19 np0005592158 cloud-init[923]: |+..+ ..+ .       |
Jan 22 07:49:19 np0005592158 cloud-init[923]: |.   o   S .      |
Jan 22 07:49:19 np0005592158 cloud-init[923]: |     . o =       |
Jan 22 07:49:19 np0005592158 cloud-init[923]: |      . +        |
Jan 22 07:49:19 np0005592158 cloud-init[923]: |       . .E      |
Jan 22 07:49:19 np0005592158 cloud-init[923]: |          .o     |
Jan 22 07:49:19 np0005592158 cloud-init[923]: +----[SHA256]-----+
Jan 22 07:49:19 np0005592158 cloud-init[923]: Generating public/private ed25519 key pair.
Jan 22 07:49:19 np0005592158 cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 22 07:49:19 np0005592158 cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 22 07:49:19 np0005592158 cloud-init[923]: The key fingerprint is:
Jan 22 07:49:19 np0005592158 cloud-init[923]: SHA256:kIVDHGW7cjeKUDtMzNO7kqs8AhPnLEgyrNcxjPv380o root@np0005592158.novalocal
Jan 22 07:49:19 np0005592158 cloud-init[923]: The key's randomart image is:
Jan 22 07:49:19 np0005592158 cloud-init[923]: +--[ED25519 256]--+
Jan 22 07:49:19 np0005592158 cloud-init[923]: |     oo++        |
Jan 22 07:49:19 np0005592158 cloud-init[923]: |     o+= .       |
Jan 22 07:49:19 np0005592158 cloud-init[923]: |.  o  O.o        |
Jan 22 07:49:19 np0005592158 cloud-init[923]: |+oo ++ + o       |
Jan 22 07:49:19 np0005592158 cloud-init[923]: |=* o.o= S o      |
Jan 22 07:49:19 np0005592158 cloud-init[923]: |* = .. * + .     |
Jan 22 07:49:19 np0005592158 cloud-init[923]: | = .  + E        |
Jan 22 07:49:19 np0005592158 cloud-init[923]: |  ..o .+.        |
Jan 22 07:49:19 np0005592158 cloud-init[923]: |   .o+..o+.      |
Jan 22 07:49:19 np0005592158 cloud-init[923]: +----[SHA256]-----+
Jan 22 07:49:19 np0005592158 systemd[1]: Finished Cloud-init: Network Stage.
Jan 22 07:49:19 np0005592158 systemd[1]: Reached target Cloud-config availability.
Jan 22 07:49:19 np0005592158 systemd[1]: Reached target Network is Online.
Jan 22 07:49:19 np0005592158 systemd[1]: Starting Cloud-init: Config Stage...
Jan 22 07:49:19 np0005592158 systemd[1]: Starting Crash recovery kernel arming...
Jan 22 07:49:19 np0005592158 systemd[1]: Starting Notify NFS peers of a restart...
Jan 22 07:49:19 np0005592158 systemd[1]: Starting System Logging Service...
Jan 22 07:49:19 np0005592158 systemd[1]: Starting OpenSSH server daemon...
Jan 22 07:49:19 np0005592158 sm-notify[1006]: Version 2.5.4 starting
Jan 22 07:49:19 np0005592158 systemd[1]: Starting Permit User Sessions...
Jan 22 07:49:19 np0005592158 systemd[1]: Started Notify NFS peers of a restart.
Jan 22 07:49:19 np0005592158 systemd[1]: Started OpenSSH server daemon.
Jan 22 07:49:19 np0005592158 systemd[1]: Finished Permit User Sessions.
Jan 22 07:49:19 np0005592158 systemd[1]: Started Command Scheduler.
Jan 22 07:49:19 np0005592158 systemd[1]: Started Getty on tty1.
Jan 22 07:49:19 np0005592158 systemd[1]: Started Serial Getty on ttyS0.
Jan 22 07:49:19 np0005592158 systemd[1]: Reached target Login Prompts.
Jan 22 07:49:19 np0005592158 rsyslogd[1007]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1007" x-info="https://www.rsyslog.com"] start
Jan 22 07:49:19 np0005592158 rsyslogd[1007]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 22 07:49:19 np0005592158 systemd[1]: Started System Logging Service.
Jan 22 07:49:19 np0005592158 systemd[1]: Reached target Multi-User System.
Jan 22 07:49:19 np0005592158 systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 22 07:49:19 np0005592158 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 22 07:49:19 np0005592158 systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 22 07:49:19 np0005592158 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 07:49:19 np0005592158 kdumpctl[1020]: kdump: No kdump initial ramdisk found.
Jan 22 07:49:19 np0005592158 kdumpctl[1020]: kdump: Rebuilding /boot/initramfs-5.14.0-661.el9.x86_64kdump.img
Jan 22 07:49:19 np0005592158 cloud-init[1135]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Thu, 22 Jan 2026 12:49:19 +0000. Up 11.86 seconds.
Jan 22 07:49:19 np0005592158 systemd[1]: Finished Cloud-init: Config Stage.
Jan 22 07:49:19 np0005592158 systemd[1]: Starting Cloud-init: Final Stage...
Jan 22 07:49:19 np0005592158 dracut[1283]: dracut-057-102.git20250818.el9
Jan 22 07:49:20 np0005592158 cloud-init[1304]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Thu, 22 Jan 2026 12:49:20 +0000. Up 12.35 seconds.
Jan 22 07:49:20 np0005592158 dracut[1287]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-661.el9.x86_64kdump.img 5.14.0-661.el9.x86_64
Jan 22 07:49:20 np0005592158 cloud-init[1333]: #############################################################
Jan 22 07:49:20 np0005592158 cloud-init[1337]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 22 07:49:20 np0005592158 cloud-init[1346]: 256 SHA256:bpFA6WEyFisKNlKIhWEOIxmhWQS1WLWXIaTPX+Qn+zM root@np0005592158.novalocal (ECDSA)
Jan 22 07:49:20 np0005592158 cloud-init[1354]: 256 SHA256:kIVDHGW7cjeKUDtMzNO7kqs8AhPnLEgyrNcxjPv380o root@np0005592158.novalocal (ED25519)
Jan 22 07:49:20 np0005592158 cloud-init[1361]: 3072 SHA256:tLLf3sAR7bmWyT372+gP1nZboxLQL9EoKjKfZihVlQc root@np0005592158.novalocal (RSA)
Jan 22 07:49:20 np0005592158 cloud-init[1364]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 22 07:49:20 np0005592158 cloud-init[1366]: #############################################################
Jan 22 07:49:20 np0005592158 cloud-init[1304]: Cloud-init v. 24.4-8.el9 finished at Thu, 22 Jan 2026 12:49:20 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 12.55 seconds
Jan 22 07:49:20 np0005592158 systemd[1]: Finished Cloud-init: Final Stage.
Jan 22 07:49:20 np0005592158 systemd[1]: Reached target Cloud-init target.
Jan 22 07:49:20 np0005592158 dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 22 07:49:20 np0005592158 dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 22 07:49:20 np0005592158 dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 22 07:49:20 np0005592158 dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 22 07:49:20 np0005592158 dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 22 07:49:20 np0005592158 dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 22 07:49:20 np0005592158 dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 22 07:49:20 np0005592158 dracut[1287]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 22 07:49:20 np0005592158 dracut[1287]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 22 07:49:20 np0005592158 dracut[1287]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 22 07:49:20 np0005592158 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 22 07:49:20 np0005592158 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 22 07:49:20 np0005592158 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 22 07:49:20 np0005592158 dracut[1287]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 22 07:49:20 np0005592158 dracut[1287]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 22 07:49:20 np0005592158 dracut[1287]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 22 07:49:20 np0005592158 dracut[1287]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 22 07:49:20 np0005592158 dracut[1287]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 22 07:49:20 np0005592158 dracut[1287]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 22 07:49:20 np0005592158 dracut[1287]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 22 07:49:20 np0005592158 dracut[1287]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 22 07:49:20 np0005592158 dracut[1287]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 22 07:49:20 np0005592158 dracut[1287]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 22 07:49:20 np0005592158 dracut[1287]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 22 07:49:20 np0005592158 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 22 07:49:20 np0005592158 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 22 07:49:20 np0005592158 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 22 07:49:20 np0005592158 dracut[1287]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 22 07:49:21 np0005592158 dracut[1287]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 22 07:49:21 np0005592158 dracut[1287]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 22 07:49:21 np0005592158 dracut[1287]: memstrack is not available
Jan 22 07:49:21 np0005592158 dracut[1287]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 22 07:49:21 np0005592158 dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 22 07:49:21 np0005592158 dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 22 07:49:21 np0005592158 dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 22 07:49:21 np0005592158 dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 22 07:49:21 np0005592158 dracut[1287]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 22 07:49:21 np0005592158 dracut[1287]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 22 07:49:21 np0005592158 dracut[1287]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 22 07:49:21 np0005592158 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 22 07:49:21 np0005592158 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 22 07:49:21 np0005592158 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 22 07:49:21 np0005592158 dracut[1287]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 22 07:49:21 np0005592158 dracut[1287]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 22 07:49:21 np0005592158 dracut[1287]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 22 07:49:21 np0005592158 dracut[1287]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 22 07:49:21 np0005592158 dracut[1287]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 22 07:49:21 np0005592158 dracut[1287]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 22 07:49:21 np0005592158 dracut[1287]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 22 07:49:21 np0005592158 dracut[1287]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 22 07:49:21 np0005592158 dracut[1287]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 22 07:49:21 np0005592158 dracut[1287]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 22 07:49:21 np0005592158 dracut[1287]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 22 07:49:21 np0005592158 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 22 07:49:21 np0005592158 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 22 07:49:21 np0005592158 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 22 07:49:21 np0005592158 dracut[1287]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 22 07:49:21 np0005592158 dracut[1287]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 22 07:49:21 np0005592158 dracut[1287]: memstrack is not available
Jan 22 07:49:21 np0005592158 dracut[1287]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 22 07:49:21 np0005592158 dracut[1287]: *** Including module: systemd ***
Jan 22 07:49:21 np0005592158 dracut[1287]: *** Including module: fips ***
Jan 22 07:49:22 np0005592158 chronyd[807]: Selected source 198.181.199.84 (2.centos.pool.ntp.org)
Jan 22 07:49:22 np0005592158 chronyd[807]: System clock TAI offset set to 37 seconds
Jan 22 07:49:22 np0005592158 dracut[1287]: *** Including module: systemd-initrd ***
Jan 22 07:49:22 np0005592158 dracut[1287]: *** Including module: i18n ***
Jan 22 07:49:22 np0005592158 dracut[1287]: *** Including module: drm ***
Jan 22 07:49:22 np0005592158 dracut[1287]: *** Including module: prefixdevname ***
Jan 22 07:49:22 np0005592158 dracut[1287]: *** Including module: kernel-modules ***
Jan 22 07:49:23 np0005592158 kernel: block vda: the capability attribute has been deprecated.
Jan 22 07:49:23 np0005592158 dracut[1287]: *** Including module: kernel-modules-extra ***
Jan 22 07:49:23 np0005592158 dracut[1287]: *** Including module: qemu ***
Jan 22 07:49:23 np0005592158 dracut[1287]: *** Including module: fstab-sys ***
Jan 22 07:49:23 np0005592158 dracut[1287]: *** Including module: rootfs-block ***
Jan 22 07:49:23 np0005592158 dracut[1287]: *** Including module: terminfo ***
Jan 22 07:49:23 np0005592158 dracut[1287]: *** Including module: udev-rules ***
Jan 22 07:49:24 np0005592158 dracut[1287]: Skipping udev rule: 91-permissions.rules
Jan 22 07:49:24 np0005592158 dracut[1287]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 22 07:49:24 np0005592158 dracut[1287]: *** Including module: virtiofs ***
Jan 22 07:49:24 np0005592158 dracut[1287]: *** Including module: dracut-systemd ***
Jan 22 07:49:24 np0005592158 dracut[1287]: *** Including module: usrmount ***
Jan 22 07:49:24 np0005592158 dracut[1287]: *** Including module: base ***
Jan 22 07:49:24 np0005592158 dracut[1287]: *** Including module: fs-lib ***
Jan 22 07:49:24 np0005592158 dracut[1287]: *** Including module: kdumpbase ***
Jan 22 07:49:25 np0005592158 dracut[1287]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 22 07:49:25 np0005592158 dracut[1287]:  microcode_ctl module: mangling fw_dir
Jan 22 07:49:25 np0005592158 dracut[1287]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 22 07:49:25 np0005592158 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 22 07:49:25 np0005592158 dracut[1287]:    microcode_ctl: configuration "intel" is ignored
Jan 22 07:49:25 np0005592158 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 22 07:49:25 np0005592158 dracut[1287]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 22 07:49:25 np0005592158 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 22 07:49:25 np0005592158 dracut[1287]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 22 07:49:25 np0005592158 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 22 07:49:25 np0005592158 dracut[1287]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 22 07:49:25 np0005592158 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 22 07:49:25 np0005592158 dracut[1287]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 22 07:49:25 np0005592158 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 22 07:49:25 np0005592158 irqbalance[785]: Cannot change IRQ 35 affinity: Operation not permitted
Jan 22 07:49:25 np0005592158 irqbalance[785]: IRQ 35 affinity is now unmanaged
Jan 22 07:49:25 np0005592158 dracut[1287]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 22 07:49:25 np0005592158 irqbalance[785]: Cannot change IRQ 25 affinity: Operation not permitted
Jan 22 07:49:25 np0005592158 irqbalance[785]: IRQ 25 affinity is now unmanaged
Jan 22 07:49:25 np0005592158 irqbalance[785]: Cannot change IRQ 33 affinity: Operation not permitted
Jan 22 07:49:25 np0005592158 irqbalance[785]: IRQ 33 affinity is now unmanaged
Jan 22 07:49:25 np0005592158 irqbalance[785]: Cannot change IRQ 34 affinity: Operation not permitted
Jan 22 07:49:25 np0005592158 irqbalance[785]: IRQ 34 affinity is now unmanaged
Jan 22 07:49:25 np0005592158 irqbalance[785]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 22 07:49:25 np0005592158 irqbalance[785]: IRQ 32 affinity is now unmanaged
Jan 22 07:49:25 np0005592158 irqbalance[785]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 22 07:49:25 np0005592158 irqbalance[785]: IRQ 30 affinity is now unmanaged
Jan 22 07:49:25 np0005592158 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 22 07:49:25 np0005592158 dracut[1287]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 22 07:49:25 np0005592158 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 22 07:49:25 np0005592158 dracut[1287]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 22 07:49:25 np0005592158 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 22 07:49:25 np0005592158 dracut[1287]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 22 07:49:25 np0005592158 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 22 07:49:25 np0005592158 dracut[1287]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 22 07:49:25 np0005592158 dracut[1287]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 22 07:49:25 np0005592158 dracut[1287]: *** Including module: openssl ***
Jan 22 07:49:25 np0005592158 dracut[1287]: *** Including module: shutdown ***
Jan 22 07:49:25 np0005592158 dracut[1287]: *** Including module: squash ***
Jan 22 07:49:26 np0005592158 dracut[1287]: *** Including modules done ***
Jan 22 07:49:26 np0005592158 dracut[1287]: *** Installing kernel module dependencies ***
Jan 22 07:49:26 np0005592158 dracut[1287]: *** Installing kernel module dependencies done ***
Jan 22 07:49:26 np0005592158 dracut[1287]: *** Resolving executable dependencies ***
Jan 22 07:49:27 np0005592158 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 22 07:49:28 np0005592158 dracut[1287]: *** Resolving executable dependencies done ***
Jan 22 07:49:28 np0005592158 dracut[1287]: *** Generating early-microcode cpio image ***
Jan 22 07:49:28 np0005592158 dracut[1287]: *** Store current command line parameters ***
Jan 22 07:49:28 np0005592158 dracut[1287]: Stored kernel commandline:
Jan 22 07:49:28 np0005592158 dracut[1287]: No dracut internal kernel commandline stored in the initramfs
Jan 22 07:49:28 np0005592158 dracut[1287]: *** Install squash loader ***
Jan 22 07:49:29 np0005592158 dracut[1287]: *** Squashing the files inside the initramfs ***
Jan 22 07:49:30 np0005592158 dracut[1287]: *** Squashing the files inside the initramfs done ***
Jan 22 07:49:30 np0005592158 dracut[1287]: *** Creating image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' ***
Jan 22 07:49:30 np0005592158 dracut[1287]: *** Hardlinking files ***
Jan 22 07:49:30 np0005592158 dracut[1287]: *** Hardlinking files done ***
Jan 22 07:49:30 np0005592158 dracut[1287]: *** Creating initramfs image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' done ***
Jan 22 07:49:32 np0005592158 kdumpctl[1020]: kdump: kexec: loaded kdump kernel
Jan 22 07:49:32 np0005592158 kdumpctl[1020]: kdump: Starting kdump: [OK]
Jan 22 07:49:32 np0005592158 systemd[1]: Finished Crash recovery kernel arming.
Jan 22 07:49:32 np0005592158 systemd[1]: Startup finished in 3.221s (kernel) + 2.593s (initrd) + 18.765s (userspace) = 24.580s.
Jan 22 07:49:47 np0005592158 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 22 07:50:30 np0005592158 systemd[1]: Created slice User Slice of UID 1000.
Jan 22 07:50:30 np0005592158 systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 22 07:50:30 np0005592158 systemd-logind[787]: New session 1 of user zuul.
Jan 22 07:50:30 np0005592158 systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 22 07:50:30 np0005592158 systemd[1]: Starting User Manager for UID 1000...
Jan 22 07:50:30 np0005592158 systemd[4310]: Queued start job for default target Main User Target.
Jan 22 07:50:30 np0005592158 systemd[4310]: Created slice User Application Slice.
Jan 22 07:50:30 np0005592158 systemd[4310]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 22 07:50:30 np0005592158 systemd[4310]: Started Daily Cleanup of User's Temporary Directories.
Jan 22 07:50:30 np0005592158 systemd[4310]: Reached target Paths.
Jan 22 07:50:30 np0005592158 systemd[4310]: Reached target Timers.
Jan 22 07:50:30 np0005592158 systemd[4310]: Starting D-Bus User Message Bus Socket...
Jan 22 07:50:30 np0005592158 systemd[4310]: Starting Create User's Volatile Files and Directories...
Jan 22 07:50:30 np0005592158 systemd[4310]: Finished Create User's Volatile Files and Directories.
Jan 22 07:50:30 np0005592158 systemd[4310]: Listening on D-Bus User Message Bus Socket.
Jan 22 07:50:30 np0005592158 systemd[4310]: Reached target Sockets.
Jan 22 07:50:30 np0005592158 systemd[4310]: Reached target Basic System.
Jan 22 07:50:30 np0005592158 systemd[4310]: Reached target Main User Target.
Jan 22 07:50:30 np0005592158 systemd[4310]: Startup finished in 109ms.
Jan 22 07:50:30 np0005592158 systemd[1]: Started User Manager for UID 1000.
Jan 22 07:50:30 np0005592158 systemd[1]: Started Session 1 of User zuul.
Jan 22 07:50:31 np0005592158 python3[4392]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 07:50:34 np0005592158 python3[4420]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 07:50:41 np0005592158 python3[4478]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 07:50:42 np0005592158 python3[4518]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 22 07:50:44 np0005592158 python3[4544]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC1DCoRB3r0Iy6aGg4LRzpWVb+uDCW+ivahM6mnwYTzs7NyJlgPrnZ6PV7GhjThi3qMi3wdL9+LpBaBPuOhI+k1w3f1FS+zKP3/xb59Ck+AhF8LIp3InS3sgWlvIGvXYvlwuN3aBMHp/hbvFOtbZFxgXhvIlVsk+m1K/J/50vtBBzyri7EjoTWDvY18FZoapjDeqss1t7AvCXVAcsVOfZsyssdWALG/AlGcmeZ9kZ/yza1tS0t7avldh0ZazNkLg/5jp3HQrTFLiETLQx8tBjdEj0Pme6UqjG17uVJkEVl4g3FLGiT4krCLRjW0sA3E3rd5e1m4tBIoSSqoqN2E+V9ctp/6T9Vpe3OcZdgKBUE9yz4tlHgQLxksFY2SiXEQYiWTctsRY30EsMJk2Qg65Fyp/ts6u4u66Uo27jNRB+ZD/vnAY4IKu94a2+6uIW/9oShh4f1cWrBlFzxXaUBj4KHar7HFljsOCavs7NCPccp7JoW8FoXONrfM+rhSgDbeDGE= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:50:45 np0005592158 python3[4568]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:50:45 np0005592158 python3[4667]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 07:50:46 np0005592158 python3[4738]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769086245.4954143-252-122386673280691/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=09ef681cfe834983ad1540236f6f180d_id_rsa follow=False checksum=9eec2026f94d681755d58aa430eaf5c6b319017b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:50:46 np0005592158 python3[4861]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 07:50:47 np0005592158 python3[4932]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769086246.458629-307-211699865187514/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=09ef681cfe834983ad1540236f6f180d_id_rsa.pub follow=False checksum=f8a39b98331ab3302b65dacd0b8176268aaf7e5b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:50:49 np0005592158 python3[4980]: ansible-ping Invoked with data=pong
Jan 22 07:50:50 np0005592158 python3[5004]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 07:50:52 np0005592158 python3[5062]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 22 07:50:53 np0005592158 python3[5094]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:50:54 np0005592158 python3[5118]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:50:54 np0005592158 python3[5142]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:50:54 np0005592158 python3[5166]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:50:55 np0005592158 python3[5190]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:50:55 np0005592158 python3[5214]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:50:57 np0005592158 python3[5240]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:50:58 np0005592158 python3[5318]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 07:50:58 np0005592158 python3[5391]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769086257.588877-32-83591313562921/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:50:59 np0005592158 python3[5439]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:50:59 np0005592158 python3[5463]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:50:59 np0005592158 python3[5487]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:00 np0005592158 python3[5511]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:00 np0005592158 python3[5535]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:00 np0005592158 python3[5559]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:00 np0005592158 python3[5583]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:01 np0005592158 python3[5607]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:01 np0005592158 python3[5631]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:01 np0005592158 python3[5655]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:01 np0005592158 python3[5679]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:02 np0005592158 python3[5703]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:02 np0005592158 python3[5727]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:02 np0005592158 python3[5751]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:03 np0005592158 python3[5775]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:03 np0005592158 python3[5799]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:03 np0005592158 python3[5823]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:04 np0005592158 python3[5847]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:04 np0005592158 python3[5871]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:04 np0005592158 python3[5895]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:04 np0005592158 python3[5919]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:05 np0005592158 python3[5943]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:05 np0005592158 python3[5967]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:05 np0005592158 python3[5991]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:06 np0005592158 python3[6015]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:06 np0005592158 python3[6039]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:08 np0005592158 python3[6065]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 22 07:51:08 np0005592158 systemd[1]: Starting Time & Date Service...
Jan 22 07:51:09 np0005592158 systemd[1]: Started Time & Date Service.
Jan 22 07:51:09 np0005592158 systemd-timedated[6067]: Changed time zone to 'UTC' (UTC).
Jan 22 07:51:09 np0005592158 python3[6096]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:51:09 np0005592158 python3[6172]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 07:51:10 np0005592158 python3[6243]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1769086269.7049272-252-2810044716117/source _original_basename=tmp6jielptk follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:51:11 np0005592158 python3[6343]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 07:51:11 np0005592158 python3[6414]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769086271.0891168-303-140319692310831/source _original_basename=tmp2c52kqcr follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:51:12 np0005592158 python3[6516]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 07:51:13 np0005592158 python3[6589]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769086272.4680164-382-277558674465974/source _original_basename=tmpsihnrey6 follow=False checksum=cb6c1a5f96f80c368134b306cfb8a4ce10f90c11 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:51:13 np0005592158 python3[6637]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 07:51:14 np0005592158 python3[6663]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 07:51:14 np0005592158 python3[6743]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 07:51:15 np0005592158 python3[6816]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1769086274.37694-452-135495289471916/source _original_basename=tmpe2h_oqjp follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:51:15 np0005592158 python3[6867]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-37d2-1cc7-00000000001f-1-compute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 07:51:16 np0005592158 python3[6895]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163efc-24cc-37d2-1cc7-000000000020-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 22 07:51:18 np0005592158 python3[6923]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:51:25 np0005592158 irqbalance[785]: Cannot change IRQ 27 affinity: Operation not permitted
Jan 22 07:51:25 np0005592158 irqbalance[785]: IRQ 27 affinity is now unmanaged
Jan 22 07:51:39 np0005592158 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 22 07:51:42 np0005592158 python3[6951]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:52:42 np0005592158 systemd-logind[787]: Session 1 logged out. Waiting for processes to exit.
Jan 22 07:52:53 np0005592158 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 22 07:52:53 np0005592158 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 22 07:52:53 np0005592158 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 22 07:52:53 np0005592158 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 22 07:52:53 np0005592158 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 22 07:52:53 np0005592158 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 22 07:52:53 np0005592158 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 22 07:52:53 np0005592158 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 22 07:52:53 np0005592158 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 22 07:52:53 np0005592158 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 22 07:52:53 np0005592158 NetworkManager[860]: <info>  [1769086373.8464] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 22 07:52:53 np0005592158 systemd-udevd[6953]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 07:52:53 np0005592158 NetworkManager[860]: <info>  [1769086373.8608] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 07:52:53 np0005592158 NetworkManager[860]: <info>  [1769086373.8641] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 22 07:52:53 np0005592158 NetworkManager[860]: <info>  [1769086373.8643] device (eth1): carrier: link connected
Jan 22 07:52:53 np0005592158 NetworkManager[860]: <info>  [1769086373.8645] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 22 07:52:53 np0005592158 NetworkManager[860]: <info>  [1769086373.8651] policy: auto-activating connection 'Wired connection 1' (22966868-29c6-340d-be5e-bba5c29bb571)
Jan 22 07:52:53 np0005592158 NetworkManager[860]: <info>  [1769086373.8654] device (eth1): Activation: starting connection 'Wired connection 1' (22966868-29c6-340d-be5e-bba5c29bb571)
Jan 22 07:52:53 np0005592158 NetworkManager[860]: <info>  [1769086373.8655] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 07:52:53 np0005592158 NetworkManager[860]: <info>  [1769086373.8657] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 07:52:53 np0005592158 NetworkManager[860]: <info>  [1769086373.8660] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 07:52:53 np0005592158 NetworkManager[860]: <info>  [1769086373.8665] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 22 07:52:53 np0005592158 systemd[4310]: Starting Mark boot as successful...
Jan 22 07:52:53 np0005592158 systemd[4310]: Finished Mark boot as successful.
Jan 22 07:52:55 np0005592158 systemd-logind[787]: New session 3 of user zuul.
Jan 22 07:52:55 np0005592158 systemd[1]: Started Session 3 of User zuul.
Jan 22 07:52:56 np0005592158 python3[6984]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-97dc-dff7-00000000018f-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 07:53:06 np0005592158 python3[7064]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 07:53:06 np0005592158 python3[7137]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769086385.7730064-155-104606407793157/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=66961519467b8831ba0c243060d8ab522bdd948e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:53:06 np0005592158 python3[7187]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 07:53:06 np0005592158 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 22 07:53:06 np0005592158 systemd[1]: Stopped Network Manager Wait Online.
Jan 22 07:53:06 np0005592158 systemd[1]: Stopping Network Manager Wait Online...
Jan 22 07:53:06 np0005592158 systemd[1]: Stopping Network Manager...
Jan 22 07:53:06 np0005592158 NetworkManager[860]: <info>  [1769086386.9985] caught SIGTERM, shutting down normally.
Jan 22 07:53:07 np0005592158 NetworkManager[860]: <info>  [1769086387.0004] dhcp4 (eth0): canceled DHCP transaction
Jan 22 07:53:07 np0005592158 NetworkManager[860]: <info>  [1769086387.0004] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 07:53:07 np0005592158 NetworkManager[860]: <info>  [1769086387.0005] dhcp4 (eth0): state changed no lease
Jan 22 07:53:07 np0005592158 NetworkManager[860]: <info>  [1769086387.0009] manager: NetworkManager state is now CONNECTING
Jan 22 07:53:07 np0005592158 NetworkManager[860]: <info>  [1769086387.0107] dhcp4 (eth1): canceled DHCP transaction
Jan 22 07:53:07 np0005592158 NetworkManager[860]: <info>  [1769086387.0107] dhcp4 (eth1): state changed no lease
Jan 22 07:53:07 np0005592158 NetworkManager[860]: <info>  [1769086387.0177] exiting (success)
Jan 22 07:53:07 np0005592158 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 22 07:53:07 np0005592158 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 22 07:53:07 np0005592158 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 22 07:53:07 np0005592158 systemd[1]: Stopped Network Manager.
Jan 22 07:53:07 np0005592158 systemd[1]: NetworkManager.service: Consumed 1.626s CPU time, 10.5M memory peak.
Jan 22 07:53:07 np0005592158 systemd[1]: Starting Network Manager...
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.0726] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:d923d6f4-79ae-48f6-b1f3-cf5ec2bceff3)
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.0728] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.0780] manager[0x558895620000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 22 07:53:07 np0005592158 systemd[1]: Starting Hostname Service...
Jan 22 07:53:07 np0005592158 systemd[1]: Started Hostname Service.
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1535] hostname: hostname: using hostnamed
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1536] hostname: static hostname changed from (none) to "np0005592158.novalocal"
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1542] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1547] manager[0x558895620000]: rfkill: Wi-Fi hardware radio set enabled
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1548] manager[0x558895620000]: rfkill: WWAN hardware radio set enabled
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1584] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1585] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1585] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1587] manager: Networking is enabled by state file
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1589] settings: Loaded settings plugin: keyfile (internal)
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1595] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1627] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1641] dhcp: init: Using DHCP client 'internal'
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1648] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1656] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1664] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1676] device (lo): Activation: starting connection 'lo' (85925d65-d6c4-4300-b142-abef792fcfc1)
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1687] device (eth0): carrier: link connected
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1693] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1701] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1702] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1710] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1721] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1730] device (eth1): carrier: link connected
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1735] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1743] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (22966868-29c6-340d-be5e-bba5c29bb571) (indicated)
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1743] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1750] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1760] device (eth1): Activation: starting connection 'Wired connection 1' (22966868-29c6-340d-be5e-bba5c29bb571)
Jan 22 07:53:07 np0005592158 systemd[1]: Started Network Manager.
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1769] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1776] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1780] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1792] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1795] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1799] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1802] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1804] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1807] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1813] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1818] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1828] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1830] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1846] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1850] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1854] device (lo): Activation: successful, device activated.
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1888] dhcp4 (eth0): state changed new lease, address=38.102.83.119
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1894] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 22 07:53:07 np0005592158 systemd[1]: Starting Network Manager Wait Online...
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1948] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1963] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1964] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1967] manager: NetworkManager state is now CONNECTED_SITE
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1969] device (eth0): Activation: successful, device activated.
Jan 22 07:53:07 np0005592158 NetworkManager[7197]: <info>  [1769086387.1973] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 22 07:53:07 np0005592158 python3[7271]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-97dc-dff7-0000000000c8-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 07:53:17 np0005592158 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 22 07:53:37 np0005592158 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 22 07:53:52 np0005592158 NetworkManager[7197]: <info>  [1769086432.6613] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 22 07:53:52 np0005592158 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 22 07:53:52 np0005592158 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 22 07:53:52 np0005592158 NetworkManager[7197]: <info>  [1769086432.6998] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 22 07:53:52 np0005592158 NetworkManager[7197]: <info>  [1769086432.7003] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 22 07:53:52 np0005592158 NetworkManager[7197]: <info>  [1769086432.7019] device (eth1): Activation: successful, device activated.
Jan 22 07:53:52 np0005592158 NetworkManager[7197]: <info>  [1769086432.7029] manager: startup complete
Jan 22 07:53:52 np0005592158 NetworkManager[7197]: <info>  [1769086432.7035] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 22 07:53:52 np0005592158 NetworkManager[7197]: <warn>  [1769086432.7062] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 22 07:53:52 np0005592158 NetworkManager[7197]: <info>  [1769086432.7075] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 22 07:53:52 np0005592158 systemd[1]: Finished Network Manager Wait Online.
Jan 22 07:53:52 np0005592158 NetworkManager[7197]: <info>  [1769086432.7227] dhcp4 (eth1): canceled DHCP transaction
Jan 22 07:53:52 np0005592158 NetworkManager[7197]: <info>  [1769086432.7229] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 22 07:53:52 np0005592158 NetworkManager[7197]: <info>  [1769086432.7229] dhcp4 (eth1): state changed no lease
Jan 22 07:53:52 np0005592158 NetworkManager[7197]: <info>  [1769086432.7251] policy: auto-activating connection 'ci-private-network' (ca5780bd-10f2-5d02-a1d0-e241b484666f)
Jan 22 07:53:52 np0005592158 NetworkManager[7197]: <info>  [1769086432.7258] device (eth1): Activation: starting connection 'ci-private-network' (ca5780bd-10f2-5d02-a1d0-e241b484666f)
Jan 22 07:53:52 np0005592158 NetworkManager[7197]: <info>  [1769086432.7259] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 07:53:52 np0005592158 NetworkManager[7197]: <info>  [1769086432.7263] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 07:53:52 np0005592158 NetworkManager[7197]: <info>  [1769086432.7272] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 07:53:52 np0005592158 NetworkManager[7197]: <info>  [1769086432.7286] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 07:53:53 np0005592158 NetworkManager[7197]: <info>  [1769086433.3971] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 07:53:53 np0005592158 NetworkManager[7197]: <info>  [1769086433.3981] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 07:53:53 np0005592158 NetworkManager[7197]: <info>  [1769086433.3987] device (eth1): Activation: successful, device activated.
Jan 22 07:54:03 np0005592158 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 22 07:54:07 np0005592158 systemd-logind[787]: Session 3 logged out. Waiting for processes to exit.
Jan 22 07:54:07 np0005592158 systemd[1]: session-3.scope: Deactivated successfully.
Jan 22 07:54:07 np0005592158 systemd[1]: session-3.scope: Consumed 1.573s CPU time.
Jan 22 07:54:07 np0005592158 systemd-logind[787]: Removed session 3.
Jan 22 07:54:58 np0005592158 systemd-logind[787]: New session 4 of user zuul.
Jan 22 07:54:58 np0005592158 systemd[1]: Started Session 4 of User zuul.
Jan 22 07:54:59 np0005592158 python3[7381]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 07:54:59 np0005592158 python3[7454]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769086498.9099114-373-212722702516492/source _original_basename=tmpo0m84ckm follow=False checksum=5e7e0974f47bfd675c68ead6f6109233c4c9d481 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:55:02 np0005592158 systemd[1]: session-4.scope: Deactivated successfully.
Jan 22 07:55:02 np0005592158 systemd-logind[787]: Session 4 logged out. Waiting for processes to exit.
Jan 22 07:55:02 np0005592158 systemd-logind[787]: Removed session 4.
Jan 22 07:56:29 np0005592158 systemd[4310]: Created slice User Background Tasks Slice.
Jan 22 07:56:30 np0005592158 systemd[4310]: Starting Cleanup of User's Temporary Files and Directories...
Jan 22 07:56:30 np0005592158 systemd[4310]: Finished Cleanup of User's Temporary Files and Directories.
Jan 22 08:00:10 np0005592158 systemd-logind[787]: New session 5 of user zuul.
Jan 22 08:00:10 np0005592158 systemd[1]: Started Session 5 of User zuul.
Jan 22 08:00:10 np0005592158 python3[7513]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163efc-24cc-68e9-2a3f-000000000ca0-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:00:11 np0005592158 python3[7542]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:00:12 np0005592158 python3[7568]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:00:12 np0005592158 python3[7594]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:00:12 np0005592158 python3[7620]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:00:12 np0005592158 python3[7646]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:00:13 np0005592158 python3[7724]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 08:00:13 np0005592158 python3[7797]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769086813.2072291-363-280776224384526/source _original_basename=tmpxbtn2uca follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:00:14 np0005592158 python3[7847]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 08:00:15 np0005592158 systemd[1]: Reloading.
Jan 22 08:00:15 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:00:16 np0005592158 python3[7903]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 22 08:00:18 np0005592158 python3[7929]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:00:18 np0005592158 python3[7957]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:00:18 np0005592158 python3[7985]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:00:18 np0005592158 python3[8013]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:00:19 np0005592158 python3[8040]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163efc-24cc-68e9-2a3f-000000000ca7-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:00:19 np0005592158 python3[8070]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 22 08:00:22 np0005592158 systemd[1]: session-5.scope: Deactivated successfully.
Jan 22 08:00:22 np0005592158 systemd[1]: session-5.scope: Consumed 4.334s CPU time.
Jan 22 08:00:22 np0005592158 systemd-logind[787]: Session 5 logged out. Waiting for processes to exit.
Jan 22 08:00:22 np0005592158 systemd-logind[787]: Removed session 5.
Jan 22 08:00:24 np0005592158 systemd-logind[787]: New session 6 of user zuul.
Jan 22 08:00:24 np0005592158 systemd[1]: Started Session 6 of User zuul.
Jan 22 08:00:25 np0005592158 python3[8103]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 22 08:00:31 np0005592158 setsebool[8141]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 22 08:00:31 np0005592158 setsebool[8141]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 22 08:00:45 np0005592158 kernel: SELinux:  Converting 385 SID table entries...
Jan 22 08:00:45 np0005592158 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 08:00:45 np0005592158 kernel: SELinux:  policy capability open_perms=1
Jan 22 08:00:45 np0005592158 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 08:00:45 np0005592158 kernel: SELinux:  policy capability always_check_network=0
Jan 22 08:00:45 np0005592158 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 08:00:45 np0005592158 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 08:00:45 np0005592158 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 08:00:55 np0005592158 kernel: SELinux:  Converting 388 SID table entries...
Jan 22 08:00:55 np0005592158 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 08:00:55 np0005592158 kernel: SELinux:  policy capability open_perms=1
Jan 22 08:00:55 np0005592158 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 08:00:55 np0005592158 kernel: SELinux:  policy capability always_check_network=0
Jan 22 08:00:55 np0005592158 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 08:00:55 np0005592158 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 08:00:55 np0005592158 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 08:01:14 np0005592158 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 22 08:01:14 np0005592158 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 08:01:14 np0005592158 systemd[1]: Starting man-db-cache-update.service...
Jan 22 08:01:14 np0005592158 systemd[1]: Reloading.
Jan 22 08:01:14 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:01:15 np0005592158 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 08:01:25 np0005592158 python3[15124]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163efc-24cc-af35-cd98-00000000000c-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:01:26 np0005592158 kernel: evm: overlay not supported
Jan 22 08:01:26 np0005592158 systemd[4310]: Starting D-Bus User Message Bus...
Jan 22 08:01:26 np0005592158 dbus-broker-launch[15772]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 22 08:01:26 np0005592158 dbus-broker-launch[15772]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 22 08:01:26 np0005592158 systemd[4310]: Started D-Bus User Message Bus.
Jan 22 08:01:26 np0005592158 dbus-broker-lau[15772]: Ready
Jan 22 08:01:26 np0005592158 systemd[4310]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 22 08:01:26 np0005592158 systemd[4310]: Created slice Slice /user.
Jan 22 08:01:26 np0005592158 systemd[4310]: podman-15652.scope: unit configures an IP firewall, but not running as root.
Jan 22 08:01:26 np0005592158 systemd[4310]: (This warning is only shown for the first unit using IP firewalling.)
Jan 22 08:01:26 np0005592158 systemd[4310]: Started podman-15652.scope.
Jan 22 08:01:26 np0005592158 systemd[4310]: Started podman-pause-64e70ee2.scope.
Jan 22 08:01:27 np0005592158 python3[16125]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.194:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.194:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:01:27 np0005592158 python3[16125]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 22 08:01:28 np0005592158 systemd[1]: session-6.scope: Deactivated successfully.
Jan 22 08:01:28 np0005592158 systemd[1]: session-6.scope: Consumed 48.989s CPU time.
Jan 22 08:01:28 np0005592158 systemd-logind[787]: Session 6 logged out. Waiting for processes to exit.
Jan 22 08:01:28 np0005592158 systemd-logind[787]: Removed session 6.
Jan 22 08:01:54 np0005592158 systemd-logind[787]: New session 7 of user zuul.
Jan 22 08:01:54 np0005592158 systemd[1]: Started Session 7 of User zuul.
Jan 22 08:01:54 np0005592158 python3[24921]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJXWzJINFux2Y3W71Rz6OTPUrCjH8iByostW8OdI2DuZKTtkp9FbD8EiNvlPjARok6n/DFn2L3T6ys0ILkIENxo= zuul@np0005592156.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 08:01:55 np0005592158 python3[25127]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJXWzJINFux2Y3W71Rz6OTPUrCjH8iByostW8OdI2DuZKTtkp9FbD8EiNvlPjARok6n/DFn2L3T6ys0ILkIENxo= zuul@np0005592156.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 08:01:55 np0005592158 python3[25501]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005592158.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 22 08:01:59 np0005592158 python3[26954]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJXWzJINFux2Y3W71Rz6OTPUrCjH8iByostW8OdI2DuZKTtkp9FbD8EiNvlPjARok6n/DFn2L3T6ys0ILkIENxo= zuul@np0005592156.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 08:02:00 np0005592158 python3[27269]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 08:02:01 np0005592158 python3[27498]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769086920.3643124-168-2838513463597/source _original_basename=tmpfe18g3ex follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:02:02 np0005592158 python3[27838]: ansible-ansible.builtin.hostname Invoked with name=compute-1 use=systemd
Jan 22 08:02:02 np0005592158 systemd[1]: Starting Hostname Service...
Jan 22 08:02:02 np0005592158 systemd[1]: Started Hostname Service.
Jan 22 08:02:02 np0005592158 systemd-hostnamed[27942]: Changed pretty hostname to 'compute-1'
Jan 22 08:02:02 np0005592158 systemd-hostnamed[27942]: Hostname set to <compute-1> (static)
Jan 22 08:02:02 np0005592158 NetworkManager[7197]: <info>  [1769086922.3286] hostname: static hostname changed from "np0005592158.novalocal" to "compute-1"
Jan 22 08:02:02 np0005592158 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 22 08:02:02 np0005592158 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 22 08:02:02 np0005592158 systemd[1]: session-7.scope: Deactivated successfully.
Jan 22 08:02:02 np0005592158 systemd[1]: session-7.scope: Consumed 2.395s CPU time.
Jan 22 08:02:02 np0005592158 systemd-logind[787]: Session 7 logged out. Waiting for processes to exit.
Jan 22 08:02:02 np0005592158 systemd-logind[787]: Removed session 7.
Jan 22 08:02:11 np0005592158 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 08:02:11 np0005592158 systemd[1]: Finished man-db-cache-update.service.
Jan 22 08:02:11 np0005592158 systemd[1]: man-db-cache-update.service: Consumed 59.950s CPU time.
Jan 22 08:02:11 np0005592158 systemd[1]: run-r1c0f0d83d91a4d6f8507d7ad1a74983e.service: Deactivated successfully.
Jan 22 08:02:12 np0005592158 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 22 08:02:32 np0005592158 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 22 08:04:10 np0005592158 systemd[1]: Starting Cleanup of Temporary Directories...
Jan 22 08:04:10 np0005592158 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 22 08:04:10 np0005592158 systemd[1]: Finished Cleanup of Temporary Directories.
Jan 22 08:04:10 np0005592158 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 22 08:07:01 np0005592158 systemd-logind[787]: New session 8 of user zuul.
Jan 22 08:07:01 np0005592158 systemd[1]: Started Session 8 of User zuul.
Jan 22 08:07:01 np0005592158 python3[30011]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:07:03 np0005592158 python3[30127]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 08:07:03 np0005592158 python3[30200]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769087223.1712844-34124-22115056938444/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:07:04 np0005592158 python3[30226]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 08:07:04 np0005592158 python3[30299]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769087223.1712844-34124-22115056938444/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:07:04 np0005592158 python3[30325]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 08:07:05 np0005592158 python3[30398]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769087223.1712844-34124-22115056938444/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:07:05 np0005592158 python3[30424]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 08:07:05 np0005592158 python3[30497]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769087223.1712844-34124-22115056938444/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:07:06 np0005592158 python3[30523]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 08:07:06 np0005592158 python3[30596]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769087223.1712844-34124-22115056938444/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:07:06 np0005592158 python3[30622]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 08:07:07 np0005592158 python3[30695]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769087223.1712844-34124-22115056938444/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:07:07 np0005592158 python3[30721]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 08:07:07 np0005592158 python3[30794]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769087223.1712844-34124-22115056938444/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:07:19 np0005592158 python3[30842]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:12:19 np0005592158 systemd[1]: session-8.scope: Deactivated successfully.
Jan 22 08:12:19 np0005592158 systemd[1]: session-8.scope: Consumed 5.585s CPU time.
Jan 22 08:12:19 np0005592158 systemd-logind[787]: Session 8 logged out. Waiting for processes to exit.
Jan 22 08:12:19 np0005592158 systemd-logind[787]: Removed session 8.
Jan 22 08:21:58 np0005592158 systemd-logind[787]: New session 9 of user zuul.
Jan 22 08:21:58 np0005592158 systemd[1]: Started Session 9 of User zuul.
Jan 22 08:21:59 np0005592158 python3.9[31008]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:22:00 np0005592158 python3.9[31190]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:22:11 np0005592158 systemd[1]: session-9.scope: Deactivated successfully.
Jan 22 08:22:11 np0005592158 systemd[1]: session-9.scope: Consumed 8.471s CPU time.
Jan 22 08:22:11 np0005592158 systemd-logind[787]: Session 9 logged out. Waiting for processes to exit.
Jan 22 08:22:11 np0005592158 systemd-logind[787]: Removed session 9.
Jan 22 08:22:26 np0005592158 systemd-logind[787]: New session 10 of user zuul.
Jan 22 08:22:26 np0005592158 systemd[1]: Started Session 10 of User zuul.
Jan 22 08:22:27 np0005592158 python3.9[31400]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 22 08:22:29 np0005592158 python3.9[31574]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:22:30 np0005592158 python3.9[31726]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:22:31 np0005592158 python3.9[31879]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:22:32 np0005592158 python3.9[32031]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:22:32 np0005592158 python3.9[32183]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:22:33 np0005592158 python3.9[32306]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088152.4621327-178-18361954212508/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:22:34 np0005592158 python3.9[32458]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:22:35 np0005592158 python3.9[32614]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:22:36 np0005592158 python3.9[32766]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:22:37 np0005592158 python3.9[32916]: ansible-ansible.builtin.service_facts Invoked
Jan 22 08:22:42 np0005592158 python3.9[33169]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:22:43 np0005592158 python3.9[33319]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:22:44 np0005592158 python3.9[33473]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:22:45 np0005592158 python3.9[33632]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:22:46 np0005592158 python3.9[33716]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:23:35 np0005592158 systemd[1]: Reloading.
Jan 22 08:23:35 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:23:35 np0005592158 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 22 08:23:37 np0005592158 systemd[1]: Reloading.
Jan 22 08:23:37 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:23:37 np0005592158 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 22 08:23:37 np0005592158 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 22 08:23:37 np0005592158 systemd[1]: Reloading.
Jan 22 08:23:37 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:23:37 np0005592158 systemd[1]: Starting dnf makecache...
Jan 22 08:23:37 np0005592158 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 22 08:23:38 np0005592158 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Jan 22 08:23:38 np0005592158 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Jan 22 08:23:38 np0005592158 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Jan 22 08:23:38 np0005592158 dnf[34006]: Failed determining last makecache time.
Jan 22 08:23:38 np0005592158 dnf[34006]: delorean-openstack-barbican-42b4c41831408a8e323 132 kB/s | 3.0 kB     00:00
Jan 22 08:23:38 np0005592158 dnf[34006]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 190 kB/s | 3.0 kB     00:00
Jan 22 08:23:38 np0005592158 dnf[34006]: delorean-openstack-cinder-1c00d6490d88e436f26ef 160 kB/s | 3.0 kB     00:00
Jan 22 08:23:38 np0005592158 dnf[34006]: delorean-python-stevedore-c4acc5639fd2329372142 175 kB/s | 3.0 kB     00:00
Jan 22 08:23:38 np0005592158 dnf[34006]: delorean-python-cloudkitty-tests-tempest-2c80f8 192 kB/s | 3.0 kB     00:00
Jan 22 08:23:38 np0005592158 dnf[34006]: delorean-os-refresh-config-9bfc52b5049be2d8de61 162 kB/s | 3.0 kB     00:00
Jan 22 08:23:38 np0005592158 dnf[34006]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 169 kB/s | 3.0 kB     00:00
Jan 22 08:23:38 np0005592158 dnf[34006]: delorean-python-designate-tests-tempest-347fdbc 171 kB/s | 3.0 kB     00:00
Jan 22 08:23:38 np0005592158 dnf[34006]: delorean-openstack-glance-1fd12c29b339f30fe823e 178 kB/s | 3.0 kB     00:00
Jan 22 08:23:38 np0005592158 dnf[34006]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 173 kB/s | 3.0 kB     00:00
Jan 22 08:23:38 np0005592158 dnf[34006]: delorean-openstack-manila-3c01b7181572c95dac462 154 kB/s | 3.0 kB     00:00
Jan 22 08:23:38 np0005592158 dnf[34006]: delorean-python-whitebox-neutron-tests-tempest- 152 kB/s | 3.0 kB     00:00
Jan 22 08:23:38 np0005592158 dnf[34006]: delorean-openstack-octavia-ba397f07a7331190208c 175 kB/s | 3.0 kB     00:00
Jan 22 08:23:38 np0005592158 dnf[34006]: delorean-openstack-watcher-c014f81a8647287f6dcc 165 kB/s | 3.0 kB     00:00
Jan 22 08:23:38 np0005592158 dnf[34006]: delorean-ansible-config_template-5ccaa22121a7ff 158 kB/s | 3.0 kB     00:00
Jan 22 08:23:38 np0005592158 dnf[34006]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 158 kB/s | 3.0 kB     00:00
Jan 22 08:23:38 np0005592158 dnf[34006]: delorean-openstack-swift-dc98a8463506ac520c469a 154 kB/s | 3.0 kB     00:00
Jan 22 08:23:38 np0005592158 dnf[34006]: delorean-python-tempestconf-8515371b7cceebd4282 172 kB/s | 3.0 kB     00:00
Jan 22 08:23:38 np0005592158 dnf[34006]: delorean-openstack-heat-ui-013accbfd179753bc3f0 135 kB/s | 3.0 kB     00:00
Jan 22 08:23:38 np0005592158 dnf[34006]: CentOS Stream 9 - BaseOS                         72 kB/s | 6.7 kB     00:00
Jan 22 08:23:38 np0005592158 dnf[34006]: CentOS Stream 9 - AppStream                      61 kB/s | 6.8 kB     00:00
Jan 22 08:23:39 np0005592158 dnf[34006]: CentOS Stream 9 - CRB                            28 kB/s | 6.6 kB     00:00
Jan 22 08:23:39 np0005592158 dnf[34006]: CentOS Stream 9 - Extras packages                33 kB/s | 7.3 kB     00:00
Jan 22 08:23:39 np0005592158 dnf[34006]: dlrn-antelope-testing                           171 kB/s | 3.0 kB     00:00
Jan 22 08:23:39 np0005592158 dnf[34006]: dlrn-antelope-build-deps                        185 kB/s | 3.0 kB     00:00
Jan 22 08:23:39 np0005592158 dnf[34006]: centos9-rabbitmq                                137 kB/s | 3.0 kB     00:00
Jan 22 08:23:39 np0005592158 dnf[34006]: centos9-storage                                 135 kB/s | 3.0 kB     00:00
Jan 22 08:23:39 np0005592158 dnf[34006]: centos9-opstools                                130 kB/s | 3.0 kB     00:00
Jan 22 08:23:39 np0005592158 dnf[34006]: NFV SIG OpenvSwitch                             137 kB/s | 3.0 kB     00:00
Jan 22 08:23:39 np0005592158 dnf[34006]: repo-setup-centos-appstream                     161 kB/s | 4.4 kB     00:00
Jan 22 08:23:39 np0005592158 dnf[34006]: repo-setup-centos-baseos                        154 kB/s | 3.9 kB     00:00
Jan 22 08:23:39 np0005592158 dnf[34006]: repo-setup-centos-highavailability              166 kB/s | 3.9 kB     00:00
Jan 22 08:23:39 np0005592158 dnf[34006]: repo-setup-centos-powertools                    193 kB/s | 4.3 kB     00:00
Jan 22 08:23:40 np0005592158 dnf[34006]: Extra Packages for Enterprise Linux 9 - x86_64   27 kB/s |  25 kB     00:00
Jan 22 08:23:41 np0005592158 dnf[34006]: Metadata cache created.
Jan 22 08:23:41 np0005592158 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 22 08:23:41 np0005592158 systemd[1]: Finished dnf makecache.
Jan 22 08:23:41 np0005592158 systemd[1]: dnf-makecache.service: Consumed 1.985s CPU time.
Jan 22 08:24:45 np0005592158 kernel: SELinux:  Converting 2725 SID table entries...
Jan 22 08:24:45 np0005592158 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 08:24:45 np0005592158 kernel: SELinux:  policy capability open_perms=1
Jan 22 08:24:45 np0005592158 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 08:24:45 np0005592158 kernel: SELinux:  policy capability always_check_network=0
Jan 22 08:24:45 np0005592158 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 08:24:45 np0005592158 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 08:24:45 np0005592158 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 08:24:45 np0005592158 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 22 08:24:46 np0005592158 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 08:24:46 np0005592158 systemd[1]: Starting man-db-cache-update.service...
Jan 22 08:24:46 np0005592158 systemd[1]: Reloading.
Jan 22 08:24:46 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:24:46 np0005592158 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 08:24:47 np0005592158 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 08:24:47 np0005592158 systemd[1]: Finished man-db-cache-update.service.
Jan 22 08:24:47 np0005592158 systemd[1]: man-db-cache-update.service: Consumed 1.316s CPU time.
Jan 22 08:24:47 np0005592158 systemd[1]: run-r201b7a5edb474e1fb1173c958de17902.service: Deactivated successfully.
Jan 22 08:24:57 np0005592158 python3.9[35300]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:24:59 np0005592158 python3.9[35581]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 22 08:25:00 np0005592158 python3.9[35733]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 22 08:25:05 np0005592158 python3.9[35887]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:25:08 np0005592158 python3.9[36039]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 22 08:25:11 np0005592158 python3.9[36191]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:25:12 np0005592158 python3.9[36343]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:25:13 np0005592158 python3.9[36466]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088311.9991283-668-128024124059447/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c4f4c98657a71a0b13d9544ea5406adecfa4896c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:25:14 np0005592158 python3.9[36618]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:25:15 np0005592158 python3.9[36770]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:25:16 np0005592158 python3.9[36923]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:25:17 np0005592158 python3.9[37075]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 22 08:25:17 np0005592158 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 08:25:17 np0005592158 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 08:25:18 np0005592158 python3.9[37229]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 22 08:25:20 np0005592158 python3.9[37387]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 22 08:25:21 np0005592158 python3.9[37547]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 22 08:25:21 np0005592158 python3.9[37700]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 22 08:25:22 np0005592158 python3.9[37858]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 22 08:25:23 np0005592158 python3.9[38010]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:25:29 np0005592158 python3.9[38165]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:25:30 np0005592158 python3.9[38317]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:25:30 np0005592158 python3.9[38440]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769088329.704591-1024-74408571587814/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:25:32 np0005592158 python3.9[38592]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:25:32 np0005592158 systemd[1]: Starting Load Kernel Modules...
Jan 22 08:25:32 np0005592158 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 22 08:25:32 np0005592158 kernel: Bridge firewalling registered
Jan 22 08:25:32 np0005592158 systemd-modules-load[38596]: Inserted module 'br_netfilter'
Jan 22 08:25:32 np0005592158 systemd[1]: Finished Load Kernel Modules.
Jan 22 08:25:32 np0005592158 python3.9[38751]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:25:33 np0005592158 python3.9[38874]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769088332.4975896-1094-131165375167901/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:25:34 np0005592158 python3.9[39026]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:25:38 np0005592158 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Jan 22 08:25:38 np0005592158 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Jan 22 08:25:39 np0005592158 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 08:25:39 np0005592158 systemd[1]: Starting man-db-cache-update.service...
Jan 22 08:25:39 np0005592158 systemd[1]: Reloading.
Jan 22 08:25:39 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:25:39 np0005592158 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 08:25:40 np0005592158 python3.9[40393]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:25:41 np0005592158 python3.9[41369]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 22 08:25:42 np0005592158 python3.9[42189]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:25:43 np0005592158 python3.9[43154]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:25:43 np0005592158 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 08:25:43 np0005592158 systemd[1]: Finished man-db-cache-update.service.
Jan 22 08:25:43 np0005592158 systemd[1]: man-db-cache-update.service: Consumed 5.176s CPU time.
Jan 22 08:25:43 np0005592158 systemd[1]: run-rdb45f3af7e234bd88beeec9e29a23930.service: Deactivated successfully.
Jan 22 08:25:43 np0005592158 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 22 08:25:43 np0005592158 systemd[1]: Starting Authorization Manager...
Jan 22 08:25:43 np0005592158 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 22 08:25:44 np0005592158 polkitd[43403]: Started polkitd version 0.117
Jan 22 08:25:44 np0005592158 systemd[1]: Started Authorization Manager.
Jan 22 08:25:45 np0005592158 python3.9[43573]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:25:45 np0005592158 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 22 08:25:45 np0005592158 systemd[1]: tuned.service: Deactivated successfully.
Jan 22 08:25:45 np0005592158 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 22 08:25:45 np0005592158 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 22 08:25:45 np0005592158 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 22 08:25:46 np0005592158 python3.9[43735]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 22 08:25:50 np0005592158 python3.9[43887]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:25:50 np0005592158 systemd[1]: Reloading.
Jan 22 08:25:50 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:25:51 np0005592158 python3.9[44076]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:25:51 np0005592158 systemd[1]: Reloading.
Jan 22 08:25:51 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:25:53 np0005592158 python3.9[44265]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:25:53 np0005592158 python3.9[44418]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:25:53 np0005592158 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 22 08:25:54 np0005592158 python3.9[44571]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:25:55 np0005592158 irqbalance[785]: Cannot change IRQ 26 affinity: Operation not permitted
Jan 22 08:25:55 np0005592158 irqbalance[785]: IRQ 26 affinity is now unmanaged
Jan 22 08:25:56 np0005592158 python3.9[44733]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:25:57 np0005592158 python3.9[44886]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:25:57 np0005592158 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 22 08:25:57 np0005592158 systemd[1]: Stopped Apply Kernel Variables.
Jan 22 08:25:57 np0005592158 systemd[1]: Stopping Apply Kernel Variables...
Jan 22 08:25:57 np0005592158 systemd[1]: Starting Apply Kernel Variables...
Jan 22 08:25:57 np0005592158 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 22 08:25:57 np0005592158 systemd[1]: Finished Apply Kernel Variables.
Jan 22 08:25:58 np0005592158 systemd[1]: session-10.scope: Deactivated successfully.
Jan 22 08:25:58 np0005592158 systemd[1]: session-10.scope: Consumed 2min 28.885s CPU time.
Jan 22 08:25:58 np0005592158 systemd-logind[787]: Session 10 logged out. Waiting for processes to exit.
Jan 22 08:25:58 np0005592158 systemd-logind[787]: Removed session 10.
Jan 22 08:26:03 np0005592158 systemd-logind[787]: New session 11 of user zuul.
Jan 22 08:26:03 np0005592158 systemd[1]: Started Session 11 of User zuul.
Jan 22 08:26:04 np0005592158 python3.9[45070]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:26:06 np0005592158 python3.9[45226]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 22 08:26:07 np0005592158 python3.9[45379]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 22 08:26:08 np0005592158 python3.9[45537]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 22 08:26:09 np0005592158 python3.9[45697]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:26:10 np0005592158 python3.9[45781]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 22 08:26:14 np0005592158 python3.9[45944]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:26:29 np0005592158 kernel: SELinux:  Converting 2737 SID table entries...
Jan 22 08:26:29 np0005592158 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 08:26:29 np0005592158 kernel: SELinux:  policy capability open_perms=1
Jan 22 08:26:29 np0005592158 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 08:26:29 np0005592158 kernel: SELinux:  policy capability always_check_network=0
Jan 22 08:26:29 np0005592158 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 08:26:29 np0005592158 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 08:26:29 np0005592158 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 08:26:30 np0005592158 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 22 08:26:30 np0005592158 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 22 08:26:31 np0005592158 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 08:26:31 np0005592158 systemd[1]: Starting man-db-cache-update.service...
Jan 22 08:26:31 np0005592158 systemd[1]: Reloading.
Jan 22 08:26:31 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:26:31 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:26:31 np0005592158 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 08:26:32 np0005592158 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 08:26:32 np0005592158 systemd[1]: Finished man-db-cache-update.service.
Jan 22 08:26:32 np0005592158 systemd[1]: run-r10f21485168d40538f29106072465696.service: Deactivated successfully.
Jan 22 08:26:33 np0005592158 python3.9[47043]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 08:26:33 np0005592158 systemd[1]: Reloading.
Jan 22 08:26:33 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:26:33 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:26:33 np0005592158 systemd[1]: Starting Open vSwitch Database Unit...
Jan 22 08:26:33 np0005592158 chown[47086]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 22 08:26:34 np0005592158 ovs-ctl[47091]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 22 08:26:34 np0005592158 ovs-ctl[47091]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 22 08:26:34 np0005592158 ovs-ctl[47091]: Starting ovsdb-server [  OK  ]
Jan 22 08:26:34 np0005592158 ovs-vsctl[47140]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 22 08:26:34 np0005592158 ovs-vsctl[47160]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"c803af81-5cf0-46ac-8f46-401e876a838c\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 22 08:26:34 np0005592158 ovs-ctl[47091]: Configuring Open vSwitch system IDs [  OK  ]
Jan 22 08:26:34 np0005592158 ovs-vsctl[47166]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Jan 22 08:26:34 np0005592158 ovs-ctl[47091]: Enabling remote OVSDB managers [  OK  ]
Jan 22 08:26:34 np0005592158 systemd[1]: Started Open vSwitch Database Unit.
Jan 22 08:26:34 np0005592158 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 22 08:26:34 np0005592158 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 22 08:26:34 np0005592158 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 22 08:26:34 np0005592158 kernel: openvswitch: Open vSwitch switching datapath
Jan 22 08:26:34 np0005592158 ovs-ctl[47210]: Inserting openvswitch module [  OK  ]
Jan 22 08:26:34 np0005592158 ovs-ctl[47179]: Starting ovs-vswitchd [  OK  ]
Jan 22 08:26:34 np0005592158 ovs-vsctl[47230]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Jan 22 08:26:34 np0005592158 ovs-ctl[47179]: Enabling remote OVSDB managers [  OK  ]
Jan 22 08:26:34 np0005592158 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 22 08:26:34 np0005592158 systemd[1]: Starting Open vSwitch...
Jan 22 08:26:34 np0005592158 systemd[1]: Finished Open vSwitch.
Jan 22 08:26:36 np0005592158 python3.9[47382]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:26:37 np0005592158 python3.9[47534]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 22 08:26:39 np0005592158 kernel: SELinux:  Converting 2751 SID table entries...
Jan 22 08:26:39 np0005592158 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 08:26:39 np0005592158 kernel: SELinux:  policy capability open_perms=1
Jan 22 08:26:39 np0005592158 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 08:26:39 np0005592158 kernel: SELinux:  policy capability always_check_network=0
Jan 22 08:26:39 np0005592158 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 08:26:39 np0005592158 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 08:26:39 np0005592158 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 08:26:40 np0005592158 python3.9[47689]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:26:41 np0005592158 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 22 08:26:41 np0005592158 python3.9[47847]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:26:44 np0005592158 python3.9[48000]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:26:45 np0005592158 python3.9[48287]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 22 08:26:46 np0005592158 python3.9[48437]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:26:47 np0005592158 python3.9[48591]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:26:50 np0005592158 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 08:26:50 np0005592158 systemd[1]: Starting man-db-cache-update.service...
Jan 22 08:26:50 np0005592158 systemd[1]: Reloading.
Jan 22 08:26:50 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:26:50 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:26:50 np0005592158 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 08:26:51 np0005592158 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 08:26:51 np0005592158 systemd[1]: Finished man-db-cache-update.service.
Jan 22 08:26:51 np0005592158 systemd[1]: run-r474fc28a3a63429a994610028b3c1011.service: Deactivated successfully.
Jan 22 08:26:52 np0005592158 python3.9[48908]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:26:52 np0005592158 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 22 08:26:52 np0005592158 systemd[1]: Stopped Network Manager Wait Online.
Jan 22 08:26:52 np0005592158 systemd[1]: Stopping Network Manager Wait Online...
Jan 22 08:26:52 np0005592158 systemd[1]: Stopping Network Manager...
Jan 22 08:26:52 np0005592158 NetworkManager[7197]: <info>  [1769088412.5976] caught SIGTERM, shutting down normally.
Jan 22 08:26:52 np0005592158 NetworkManager[7197]: <info>  [1769088412.6003] dhcp4 (eth0): canceled DHCP transaction
Jan 22 08:26:52 np0005592158 NetworkManager[7197]: <info>  [1769088412.6004] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 08:26:52 np0005592158 NetworkManager[7197]: <info>  [1769088412.6004] dhcp4 (eth0): state changed no lease
Jan 22 08:26:52 np0005592158 NetworkManager[7197]: <info>  [1769088412.6007] manager: NetworkManager state is now CONNECTED_SITE
Jan 22 08:26:52 np0005592158 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 22 08:26:52 np0005592158 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 22 08:26:53 np0005592158 NetworkManager[7197]: <info>  [1769088413.2344] exiting (success)
Jan 22 08:26:53 np0005592158 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 22 08:26:53 np0005592158 systemd[1]: Stopped Network Manager.
Jan 22 08:26:53 np0005592158 systemd[1]: NetworkManager.service: Consumed 14.762s CPU time, 4.1M memory peak, read 0B from disk, written 20.0K to disk.
Jan 22 08:26:53 np0005592158 systemd[1]: Starting Network Manager...
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.3019] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:d923d6f4-79ae-48f6-b1f3-cf5ec2bceff3)
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.3023] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.3094] manager[0x557f129f7000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 22 08:26:53 np0005592158 systemd[1]: Starting Hostname Service...
Jan 22 08:26:53 np0005592158 systemd[1]: Started Hostname Service.
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.3916] hostname: hostname: using hostnamed
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.3917] hostname: static hostname changed from (none) to "compute-1"
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.3922] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.3928] manager[0x557f129f7000]: rfkill: Wi-Fi hardware radio set enabled
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.3928] manager[0x557f129f7000]: rfkill: WWAN hardware radio set enabled
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.3949] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.3958] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.3959] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.3959] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.3960] manager: Networking is enabled by state file
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.3962] settings: Loaded settings plugin: keyfile (internal)
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.3967] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.3989] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.3998] dhcp: init: Using DHCP client 'internal'
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4000] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4006] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4010] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4018] device (lo): Activation: starting connection 'lo' (85925d65-d6c4-4300-b142-abef792fcfc1)
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4025] device (eth0): carrier: link connected
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4031] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4035] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4036] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4040] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4046] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4053] device (eth1): carrier: link connected
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4057] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4062] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (ca5780bd-10f2-5d02-a1d0-e241b484666f) (indicated)
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4062] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4066] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4072] device (eth1): Activation: starting connection 'ci-private-network' (ca5780bd-10f2-5d02-a1d0-e241b484666f)
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4080] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 22 08:26:53 np0005592158 systemd[1]: Started Network Manager.
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4088] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4090] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4093] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4108] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4121] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4125] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4127] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4131] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4138] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4143] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4151] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4163] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4171] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4173] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4177] device (lo): Activation: successful, device activated.
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4188] dhcp4 (eth0): state changed new lease, address=38.102.83.119
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4193] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4271] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4275] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4277] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4279] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4281] device (eth1): Activation: successful, device activated.
Jan 22 08:26:53 np0005592158 systemd[1]: Starting Network Manager Wait Online...
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4328] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4331] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4336] manager: NetworkManager state is now CONNECTED_SITE
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4338] device (eth0): Activation: successful, device activated.
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4345] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 22 08:26:53 np0005592158 NetworkManager[48926]: <info>  [1769088413.4348] manager: startup complete
Jan 22 08:26:53 np0005592158 systemd[1]: Finished Network Manager Wait Online.
Jan 22 08:26:54 np0005592158 python3.9[49134]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:27:03 np0005592158 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 22 08:27:06 np0005592158 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 08:27:06 np0005592158 systemd[1]: Starting man-db-cache-update.service...
Jan 22 08:27:06 np0005592158 systemd[1]: Reloading.
Jan 22 08:27:06 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:27:06 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:27:06 np0005592158 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 08:27:07 np0005592158 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 08:27:07 np0005592158 systemd[1]: Finished man-db-cache-update.service.
Jan 22 08:27:07 np0005592158 systemd[1]: run-r4e5152777b5c49e69c3010147c72545b.service: Deactivated successfully.
Jan 22 08:27:08 np0005592158 python3.9[49594]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:27:09 np0005592158 python3.9[49746]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:27:10 np0005592158 python3.9[49900]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:27:11 np0005592158 python3.9[50052]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:27:12 np0005592158 python3.9[50204]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:27:12 np0005592158 python3.9[50356]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:27:13 np0005592158 python3.9[50508]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:27:14 np0005592158 python3.9[50631]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088433.2044976-648-177946868683446/.source _original_basename=.zjt8zoce follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:27:15 np0005592158 python3.9[50783]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:27:16 np0005592158 python3.9[50935]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 22 08:27:17 np0005592158 python3.9[51087]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:27:20 np0005592158 python3.9[51514]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 22 08:27:21 np0005592158 ansible-async_wrapper.py[51689]: Invoked with j277768451889 300 /home/zuul/.ansible/tmp/ansible-tmp-1769088440.5615606-846-197457388192672/AnsiballZ_edpm_os_net_config.py _
Jan 22 08:27:21 np0005592158 ansible-async_wrapper.py[51692]: Starting module and watcher
Jan 22 08:27:21 np0005592158 ansible-async_wrapper.py[51692]: Start watching 51693 (300)
Jan 22 08:27:21 np0005592158 ansible-async_wrapper.py[51693]: Start module (51693)
Jan 22 08:27:21 np0005592158 ansible-async_wrapper.py[51689]: Return async_wrapper task started.
Jan 22 08:27:22 np0005592158 python3.9[51694]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 22 08:27:22 np0005592158 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 22 08:27:22 np0005592158 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 22 08:27:22 np0005592158 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 22 08:27:22 np0005592158 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 22 08:27:22 np0005592158 kernel: cfg80211: failed to load regulatory.db
Jan 22 08:27:23 np0005592158 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9098] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51695 uid=0 result="success"
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9112] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51695 uid=0 result="success"
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9575] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9578] audit: op="connection-add" uuid="f3a1b4c8-6898-43c8-a145-cff6493db8d5" name="br-ex-br" pid=51695 uid=0 result="success"
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9602] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9604] audit: op="connection-add" uuid="89645a98-362a-4a90-ad96-b42765a6e74e" name="br-ex-port" pid=51695 uid=0 result="success"
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9619] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9621] audit: op="connection-add" uuid="9803fa93-e62a-4987-9a66-8739bb27254a" name="eth1-port" pid=51695 uid=0 result="success"
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9632] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9633] audit: op="connection-add" uuid="784088bf-7c7d-46bf-a830-24509eb2750b" name="vlan20-port" pid=51695 uid=0 result="success"
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9643] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9645] audit: op="connection-add" uuid="56cde7b4-6262-426f-956e-5f1c36f70304" name="vlan21-port" pid=51695 uid=0 result="success"
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9655] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9656] audit: op="connection-add" uuid="ca2d6919-0e35-4f6c-b239-db4784ee9143" name="vlan22-port" pid=51695 uid=0 result="success"
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9665] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9667] audit: op="connection-add" uuid="1521fdf4-2e0c-41c9-bc78-9fc63a3e68f3" name="vlan23-port" pid=51695 uid=0 result="success"
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9690] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.autoconnect-priority,connection.timestamp,ipv6.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode,802-3-ethernet.mtu" pid=51695 uid=0 result="success"
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9705] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9707] audit: op="connection-add" uuid="ed8ab3e7-d1ec-48db-ab69-d8b86554973c" name="br-ex-if" pid=51695 uid=0 result="success"
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9758] audit: op="connection-update" uuid="ca5780bd-10f2-5d02-a1d0-e241b484666f" name="ci-private-network" args="ipv4.addresses,ipv4.dns,ipv4.never-default,ipv4.routes,ipv4.method,ipv4.routing-rules,connection.master,connection.port-type,connection.controller,connection.timestamp,connection.slave-type,ipv6.addresses,ipv6.dns,ipv6.routes,ipv6.method,ipv6.addr-gen-mode,ipv6.routing-rules,ovs-interface.type,ovs-external-ids.data" pid=51695 uid=0 result="success"
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9774] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9776] audit: op="connection-add" uuid="3899566f-b038-40e1-8d3a-797a9203ea2d" name="vlan20-if" pid=51695 uid=0 result="success"
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9790] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9791] audit: op="connection-add" uuid="28614ea6-7886-4bf9-9d35-18dd738908ba" name="vlan21-if" pid=51695 uid=0 result="success"
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9807] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9809] audit: op="connection-add" uuid="550bda04-7f08-4470-b6be-0d94b7fdd799" name="vlan22-if" pid=51695 uid=0 result="success"
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9825] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9826] audit: op="connection-add" uuid="45593d9f-ff07-464b-939b-e5a9bc1f4ea5" name="vlan23-if" pid=51695 uid=0 result="success"
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9839] audit: op="connection-delete" uuid="22966868-29c6-340d-be5e-bba5c29bb571" name="Wired connection 1" pid=51695 uid=0 result="success"
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9853] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <warn>  [1769088443.9858] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9866] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9869] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (f3a1b4c8-6898-43c8-a145-cff6493db8d5)
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9869] audit: op="connection-activate" uuid="f3a1b4c8-6898-43c8-a145-cff6493db8d5" name="br-ex-br" pid=51695 uid=0 result="success"
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9871] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <warn>  [1769088443.9872] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9876] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9880] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (89645a98-362a-4a90-ad96-b42765a6e74e)
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9882] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <warn>  [1769088443.9883] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9887] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9891] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (9803fa93-e62a-4987-9a66-8739bb27254a)
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9893] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <warn>  [1769088443.9894] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9899] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9903] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (784088bf-7c7d-46bf-a830-24509eb2750b)
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9904] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <warn>  [1769088443.9905] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9910] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9914] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (56cde7b4-6262-426f-956e-5f1c36f70304)
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9916] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <warn>  [1769088443.9917] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9921] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9925] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (ca2d6919-0e35-4f6c-b239-db4784ee9143)
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9927] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <warn>  [1769088443.9928] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9933] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9937] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (1521fdf4-2e0c-41c9-bc78-9fc63a3e68f3)
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9939] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9941] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9943] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9950] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <warn>  [1769088443.9951] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9953] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9958] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (ed8ab3e7-d1ec-48db-ab69-d8b86554973c)
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9958] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9962] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9963] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9965] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9966] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9987] device (eth1): disconnecting for new activation request.
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9988] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9991] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9993] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9995] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 22 08:27:23 np0005592158 NetworkManager[48926]: <info>  [1769088443.9998] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <warn>  [1769088444.0000] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0003] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0009] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (3899566f-b038-40e1-8d3a-797a9203ea2d)
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0010] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0013] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0016] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0017] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0021] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <warn>  [1769088444.0022] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0026] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0032] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (28614ea6-7886-4bf9-9d35-18dd738908ba)
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0033] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0036] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0038] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0040] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0044] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <warn>  [1769088444.0045] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0049] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0054] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (550bda04-7f08-4470-b6be-0d94b7fdd799)
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0055] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0059] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0061] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0063] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0067] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <warn>  [1769088444.0068] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0072] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0078] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (45593d9f-ff07-464b-939b-e5a9bc1f4ea5)
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0079] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0082] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0084] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0086] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0089] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0105] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.autoconnect-priority,ipv6.method,ipv6.addr-gen-mode,802-3-ethernet.mtu" pid=51695 uid=0 result="success"
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0108] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0113] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0115] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0124] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0130] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0135] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0140] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0142] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0147] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0151] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 kernel: ovs-system: entered promiscuous mode
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0168] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0170] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0175] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0181] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 kernel: Timeout policy base is empty
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0185] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0188] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0194] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0199] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0203] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 systemd-udevd[51701]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0205] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0211] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0217] dhcp4 (eth0): canceled DHCP transaction
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0218] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0218] dhcp4 (eth0): state changed no lease
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0221] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 22 08:27:24 np0005592158 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0235] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0239] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51695 uid=0 result="fail" reason="Device is not activated"
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0283] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0292] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0295] dhcp4 (eth0): state changed new lease, address=38.102.83.119
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0299] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0304] device (eth1): disconnecting for new activation request.
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0305] audit: op="connection-activate" uuid="ca5780bd-10f2-5d02-a1d0-e241b484666f" name="ci-private-network" pid=51695 uid=0 result="success"
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0353] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Jan 22 08:27:24 np0005592158 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0387] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51695 uid=0 result="success"
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0388] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0578] device (eth1): Activation: starting connection 'ci-private-network' (ca5780bd-10f2-5d02-a1d0-e241b484666f)
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0586] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0596] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0601] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0611] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0615] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0620] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0622] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0624] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0626] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0627] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0629] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0639] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0646] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0649] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0652] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0654] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0660] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0684] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0690] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0696] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0701] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0704] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0709] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0713] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0721] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0729] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0775] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0778] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 kernel: br-ex: entered promiscuous mode
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0798] device (eth1): Activation: successful, device activated.
Jan 22 08:27:24 np0005592158 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 22 08:27:24 np0005592158 kernel: vlan22: entered promiscuous mode
Jan 22 08:27:24 np0005592158 systemd-udevd[51699]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 08:27:24 np0005592158 kernel: vlan20: entered promiscuous mode
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0967] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.0984] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 systemd-udevd[51700]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.1013] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.1016] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 kernel: vlan23: entered promiscuous mode
Jan 22 08:27:24 np0005592158 systemd-udevd[51807]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.1032] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 22 08:27:24 np0005592158 kernel: vlan21: entered promiscuous mode
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.1121] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.1138] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.1150] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.1163] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.1194] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.1201] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.1202] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.1208] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.1214] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.1220] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.1225] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.1237] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.1290] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.1291] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.1294] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.1300] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.1318] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.1353] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.1354] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592158 NetworkManager[48926]: <info>  [1769088444.1359] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 22 08:27:25 np0005592158 NetworkManager[48926]: <info>  [1769088445.2683] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51695 uid=0 result="success"
Jan 22 08:27:25 np0005592158 NetworkManager[48926]: <info>  [1769088445.5249] checkpoint[0x557f129cd950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 22 08:27:25 np0005592158 NetworkManager[48926]: <info>  [1769088445.5253] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51695 uid=0 result="success"
Jan 22 08:27:25 np0005592158 python3.9[52056]: ansible-ansible.legacy.async_status Invoked with jid=j277768451889.51689 mode=status _async_dir=/root/.ansible_async
Jan 22 08:27:25 np0005592158 NetworkManager[48926]: <info>  [1769088445.8283] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51695 uid=0 result="success"
Jan 22 08:27:25 np0005592158 NetworkManager[48926]: <info>  [1769088445.8296] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51695 uid=0 result="success"
Jan 22 08:27:26 np0005592158 NetworkManager[48926]: <info>  [1769088446.4681] audit: op="networking-control" arg="global-dns-configuration" pid=51695 uid=0 result="success"
Jan 22 08:27:26 np0005592158 NetworkManager[48926]: <info>  [1769088446.4818] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 22 08:27:26 np0005592158 NetworkManager[48926]: <info>  [1769088446.5156] audit: op="networking-control" arg="global-dns-configuration" pid=51695 uid=0 result="success"
Jan 22 08:27:26 np0005592158 NetworkManager[48926]: <info>  [1769088446.5990] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51695 uid=0 result="success"
Jan 22 08:27:26 np0005592158 NetworkManager[48926]: <info>  [1769088446.7721] checkpoint[0x557f129cda20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 22 08:27:26 np0005592158 NetworkManager[48926]: <info>  [1769088446.7727] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51695 uid=0 result="success"
Jan 22 08:27:26 np0005592158 ansible-async_wrapper.py[51692]: 51693 still running (300)
Jan 22 08:27:26 np0005592158 ansible-async_wrapper.py[51693]: Module complete (51693)
Jan 22 08:27:29 np0005592158 python3.9[52162]: ansible-ansible.legacy.async_status Invoked with jid=j277768451889.51689 mode=status _async_dir=/root/.ansible_async
Jan 22 08:27:29 np0005592158 python3.9[52262]: ansible-ansible.legacy.async_status Invoked with jid=j277768451889.51689 mode=cleanup _async_dir=/root/.ansible_async
Jan 22 08:27:30 np0005592158 python3.9[52414]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:27:31 np0005592158 python3.9[52537]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088450.159991-927-57547827339727/.source.returncode _original_basename=.95myxv3s follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:27:31 np0005592158 ansible-async_wrapper.py[51692]: Done in kid B.
Jan 22 08:27:32 np0005592158 python3.9[52689]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:27:32 np0005592158 python3.9[52813]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088451.8722432-975-49227606299265/.source.cfg _original_basename=.6qp1vol0 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:27:33 np0005592158 python3.9[52965]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:27:33 np0005592158 systemd[1]: Reloading Network Manager...
Jan 22 08:27:34 np0005592158 NetworkManager[48926]: <info>  [1769088454.0212] audit: op="reload" arg="0" pid=52969 uid=0 result="success"
Jan 22 08:27:34 np0005592158 NetworkManager[48926]: <info>  [1769088454.0223] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 22 08:27:34 np0005592158 systemd[1]: Reloaded Network Manager.
Jan 22 08:27:35 np0005592158 systemd[1]: session-11.scope: Deactivated successfully.
Jan 22 08:27:35 np0005592158 systemd[1]: session-11.scope: Consumed 59.059s CPU time.
Jan 22 08:27:35 np0005592158 systemd-logind[787]: Session 11 logged out. Waiting for processes to exit.
Jan 22 08:27:35 np0005592158 systemd-logind[787]: Removed session 11.
Jan 22 08:27:40 np0005592158 systemd-logind[787]: New session 12 of user zuul.
Jan 22 08:27:40 np0005592158 systemd[1]: Started Session 12 of User zuul.
Jan 22 08:27:41 np0005592158 python3.9[53153]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:27:42 np0005592158 python3.9[53308]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:27:44 np0005592158 python3.9[53501]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:27:44 np0005592158 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 22 08:27:44 np0005592158 systemd[1]: session-12.scope: Deactivated successfully.
Jan 22 08:27:44 np0005592158 systemd[1]: session-12.scope: Consumed 2.459s CPU time.
Jan 22 08:27:44 np0005592158 systemd-logind[787]: Session 12 logged out. Waiting for processes to exit.
Jan 22 08:27:44 np0005592158 systemd-logind[787]: Removed session 12.
Jan 22 08:27:50 np0005592158 systemd-logind[787]: New session 13 of user zuul.
Jan 22 08:27:50 np0005592158 systemd[1]: Started Session 13 of User zuul.
Jan 22 08:27:51 np0005592158 python3.9[53683]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:27:52 np0005592158 python3.9[53838]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:27:53 np0005592158 python3.9[53994]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:27:54 np0005592158 python3.9[54078]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:27:57 np0005592158 python3.9[54232]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:27:59 np0005592158 python3.9[54427]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:28:00 np0005592158 python3.9[54579]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:28:00 np0005592158 podman[54580]: 2026-01-22 13:28:00.221904366 +0000 UTC m=+0.069431488 system refresh
Jan 22 08:28:01 np0005592158 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:28:01 np0005592158 python3.9[54741]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:28:02 np0005592158 python3.9[54864]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088480.9074833-197-277852061083688/.source.json follow=False _original_basename=podman_network_config.j2 checksum=f4ccbdd6e115f5848572a062f4ef89a06a1003e6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:28:03 np0005592158 python3.9[55016]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:28:03 np0005592158 python3.9[55139]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769088482.6561015-242-126169276907312/.source.conf follow=False _original_basename=registries.conf.j2 checksum=5a3e69bacb50e2daad69ea0ffc6501536059b061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:28:04 np0005592158 python3.9[55291]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:28:05 np0005592158 python3.9[55443]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:28:06 np0005592158 python3.9[55595]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:28:06 np0005592158 python3.9[55747]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:28:07 np0005592158 python3.9[55899]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:28:10 np0005592158 python3.9[56052]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:28:11 np0005592158 python3.9[56206]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:28:11 np0005592158 python3.9[56358]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:28:12 np0005592158 python3.9[56510]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:28:13 np0005592158 python3.9[56663]: ansible-service_facts Invoked
Jan 22 08:28:13 np0005592158 network[56680]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 08:28:13 np0005592158 network[56681]: 'network-scripts' will be removed from distribution in near future.
Jan 22 08:28:13 np0005592158 network[56682]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 08:28:20 np0005592158 python3.9[57134]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:28:23 np0005592158 python3.9[57287]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 22 08:28:25 np0005592158 python3.9[57439]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:28:26 np0005592158 python3.9[57564]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088504.8943295-674-184524665739314/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:28:26 np0005592158 python3.9[57718]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:28:27 np0005592158 python3.9[57843]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088506.425851-720-69465525638892/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:28:29 np0005592158 python3.9[57997]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:28:31 np0005592158 python3.9[58151]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:28:32 np0005592158 python3.9[58235]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:28:34 np0005592158 python3.9[58389]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:28:34 np0005592158 python3.9[58473]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:28:34 np0005592158 chronyd[807]: chronyd exiting
Jan 22 08:28:34 np0005592158 systemd[1]: Stopping NTP client/server...
Jan 22 08:28:34 np0005592158 systemd[1]: chronyd.service: Deactivated successfully.
Jan 22 08:28:34 np0005592158 systemd[1]: Stopped NTP client/server.
Jan 22 08:28:34 np0005592158 systemd[1]: Starting NTP client/server...
Jan 22 08:28:35 np0005592158 chronyd[58482]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 22 08:28:35 np0005592158 chronyd[58482]: Frequency -26.955 +/- 0.107 ppm read from /var/lib/chrony/drift
Jan 22 08:28:35 np0005592158 chronyd[58482]: Loaded seccomp filter (level 2)
Jan 22 08:28:35 np0005592158 systemd[1]: Started NTP client/server.
Jan 22 08:28:35 np0005592158 systemd[1]: session-13.scope: Deactivated successfully.
Jan 22 08:28:35 np0005592158 systemd[1]: session-13.scope: Consumed 27.764s CPU time.
Jan 22 08:28:35 np0005592158 systemd-logind[787]: Session 13 logged out. Waiting for processes to exit.
Jan 22 08:28:35 np0005592158 systemd-logind[787]: Removed session 13.
Jan 22 08:28:41 np0005592158 systemd-logind[787]: New session 14 of user zuul.
Jan 22 08:28:41 np0005592158 systemd[1]: Started Session 14 of User zuul.
Jan 22 08:28:42 np0005592158 python3.9[58663]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:28:44 np0005592158 python3.9[58815]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:28:45 np0005592158 python3.9[58938]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088523.690781-63-93912219811998/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:28:45 np0005592158 systemd[1]: session-14.scope: Deactivated successfully.
Jan 22 08:28:45 np0005592158 systemd[1]: session-14.scope: Consumed 1.704s CPU time.
Jan 22 08:28:45 np0005592158 systemd-logind[787]: Session 14 logged out. Waiting for processes to exit.
Jan 22 08:28:45 np0005592158 systemd-logind[787]: Removed session 14.
Jan 22 08:28:51 np0005592158 systemd-logind[787]: New session 15 of user zuul.
Jan 22 08:28:51 np0005592158 systemd[1]: Started Session 15 of User zuul.
Jan 22 08:28:52 np0005592158 python3.9[59116]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:28:53 np0005592158 python3.9[59272]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:28:55 np0005592158 python3.9[59447]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:28:55 np0005592158 python3.9[59570]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1769088534.2928286-84-245382725081599/.source.json _original_basename=.66ptb2bq follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:28:56 np0005592158 python3.9[59722]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:28:57 np0005592158 python3.9[59845]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088536.4545789-153-93563072989677/.source _original_basename=.xj51wf45 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:28:58 np0005592158 python3.9[59997]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:28:59 np0005592158 python3.9[60149]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:00 np0005592158 python3.9[60272]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769088539.0574381-225-258388828749454/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:29:01 np0005592158 python3.9[60424]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:01 np0005592158 python3.9[60547]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769088540.5408535-225-150098069100563/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:29:02 np0005592158 python3.9[60699]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:03 np0005592158 python3.9[60851]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:03 np0005592158 python3.9[60974]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088542.929871-337-249181427114140/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:06 np0005592158 python3.9[61126]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:06 np0005592158 python3.9[61249]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088545.6632912-381-13740477493226/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:07 np0005592158 python3.9[61401]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:29:07 np0005592158 systemd[1]: Reloading.
Jan 22 08:29:08 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:29:08 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:29:08 np0005592158 systemd[1]: Reloading.
Jan 22 08:29:08 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:29:08 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:29:08 np0005592158 systemd[1]: Starting EDPM Container Shutdown...
Jan 22 08:29:08 np0005592158 systemd[1]: Finished EDPM Container Shutdown.
Jan 22 08:29:09 np0005592158 python3.9[61628]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:10 np0005592158 python3.9[61751]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088548.9482949-450-125501506428434/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:10 np0005592158 python3.9[61903]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:11 np0005592158 python3.9[62026]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088550.2340848-495-214433902483877/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:12 np0005592158 python3.9[62178]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:29:12 np0005592158 systemd[1]: Reloading.
Jan 22 08:29:12 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:29:12 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:29:12 np0005592158 systemd[1]: Reloading.
Jan 22 08:29:12 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:29:12 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:29:12 np0005592158 systemd[1]: Starting Create netns directory...
Jan 22 08:29:12 np0005592158 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 22 08:29:12 np0005592158 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 22 08:29:12 np0005592158 systemd[1]: Finished Create netns directory.
Jan 22 08:29:13 np0005592158 python3.9[62406]: ansible-ansible.builtin.service_facts Invoked
Jan 22 08:29:13 np0005592158 network[62423]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 08:29:13 np0005592158 network[62424]: 'network-scripts' will be removed from distribution in near future.
Jan 22 08:29:13 np0005592158 network[62425]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 08:29:20 np0005592158 python3.9[62687]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:29:20 np0005592158 systemd[1]: Reloading.
Jan 22 08:29:20 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:29:20 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:29:20 np0005592158 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 22 08:29:21 np0005592158 iptables.init[62727]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 22 08:29:21 np0005592158 iptables.init[62727]: iptables: Flushing firewall rules: [  OK  ]
Jan 22 08:29:21 np0005592158 systemd[1]: iptables.service: Deactivated successfully.
Jan 22 08:29:21 np0005592158 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 22 08:29:22 np0005592158 python3.9[62924]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:29:22 np0005592158 python3.9[63078]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:29:23 np0005592158 systemd[1]: Reloading.
Jan 22 08:29:23 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:29:23 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:29:23 np0005592158 systemd[1]: Starting Netfilter Tables...
Jan 22 08:29:23 np0005592158 systemd[1]: Finished Netfilter Tables.
Jan 22 08:29:24 np0005592158 python3.9[63270]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:29:30 np0005592158 python3.9[63423]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:31 np0005592158 python3.9[63548]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088570.1152885-702-267498966640148/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:32 np0005592158 python3.9[63701]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:29:32 np0005592158 systemd[1]: Reloading OpenSSH server daemon...
Jan 22 08:29:32 np0005592158 systemd[1]: Reloaded OpenSSH server daemon.
Jan 22 08:29:33 np0005592158 python3.9[63857]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:34 np0005592158 python3.9[64009]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:34 np0005592158 python3.9[64132]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088573.6808894-795-45369300012057/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:35 np0005592158 python3.9[64284]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 22 08:29:35 np0005592158 systemd[1]: Starting Time & Date Service...
Jan 22 08:29:36 np0005592158 systemd[1]: Started Time & Date Service.
Jan 22 08:29:36 np0005592158 python3.9[64440]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:37 np0005592158 python3.9[64592]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:38 np0005592158 python3.9[64715]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088577.2663472-900-244669063018342/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:39 np0005592158 python3.9[64867]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:39 np0005592158 python3.9[64990]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088578.6459265-945-248789055637226/.source.yaml _original_basename=.n05rcse9 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:40 np0005592158 python3.9[65142]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:40 np0005592158 python3.9[65265]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088579.943893-990-38286361329198/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:41 np0005592158 python3.9[65417]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:29:42 np0005592158 python3.9[65570]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:29:43 np0005592158 python3[65723]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 22 08:29:44 np0005592158 python3.9[65875]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:44 np0005592158 python3.9[65998]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088583.7906756-1107-181337619762922/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:45 np0005592158 python3.9[66150]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:46 np0005592158 python3.9[66273]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088585.1647785-1153-279933507833880/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:47 np0005592158 python3.9[66425]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:47 np0005592158 python3.9[66548]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088586.7150722-1197-75211208544196/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:48 np0005592158 python3.9[66700]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:49 np0005592158 python3.9[66823]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088587.9993486-1242-141232962556755/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:49 np0005592158 python3.9[66975]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:50 np0005592158 python3.9[67098]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088589.3706844-1287-128441354822573/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:51 np0005592158 python3.9[67250]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:52 np0005592158 python3.9[67402]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:29:53 np0005592158 python3.9[67561]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:53 np0005592158 python3.9[67714]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:54 np0005592158 python3.9[67866]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:55 np0005592158 python3.9[68018]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 22 08:29:56 np0005592158 python3.9[68171]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 22 08:29:57 np0005592158 systemd[1]: session-15.scope: Deactivated successfully.
Jan 22 08:29:57 np0005592158 systemd[1]: session-15.scope: Consumed 37.796s CPU time.
Jan 22 08:29:57 np0005592158 systemd-logind[787]: Session 15 logged out. Waiting for processes to exit.
Jan 22 08:29:57 np0005592158 systemd-logind[787]: Removed session 15.
Jan 22 08:30:03 np0005592158 systemd-logind[787]: New session 16 of user zuul.
Jan 22 08:30:03 np0005592158 systemd[1]: Started Session 16 of User zuul.
Jan 22 08:30:05 np0005592158 python3.9[68352]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 22 08:30:06 np0005592158 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 22 08:30:06 np0005592158 python3.9[68506]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:30:08 np0005592158 python3.9[68658]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:30:09 np0005592158 python3.9[68810]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCz1S+AyqG+uG2QcnBxDRKRCSQ1ADb7AX9YKwfPf8jy0Q8YD3aJm/CVexcMyR1BQUaGjRFoZkm/O4ekVQ36cOQ2M7HRv78pGNm0BGtfNeFeRB5w5+RSPgj1rY9joGiRIZoyVVlz9uuM9NTlYiNC/X5gLWfreUbCGl6lDKkxGdOjUnjuZ2djcx48WXZurkkcjd9j3WCQl899CDpx6elTEEZaV3/mbpfEtOtTXEFfoq1Z1XSjngnkZMARqt+JIN02f6kgEgWNSRAJxqYbFz1jtY43UJ/C2mO29LedfXOW3dpKCC6QHdPDSQJp2Jrf0izl52jvmpDvr6wWY9PW9AmMyxh1gSuP1a/uteKBBf7vlxtpYJWDSivQxPZw3RbBZuhspxefEOUXkwGNycW/+rPGFZRrAVYWLTZ6dLn0aviyE1+ZEDIMJop1CohPOhvJxJ7s1ulnjvVDc7kLhmBewXbeY3Lp6SoMUK8ziKHsTr2Y/RfK8d7LXmARc7+O9VWI4VVV8U=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIArjsNRQko0Q06DDAhSCoRYTLidRzR9vGa18TMghIrTh#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBDfBKVIdWmS1D3kNVJYnvsERskkDp7/TXgEseqOABxcNISULCvy6hWTcKYjXdFK5Yrl53dvxfzzAGTPPln3an4=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDARChhswCxxjhho4qSL0BKXUq4AvMW1MDxy3K15MpkFlnctOqsuulAZum+3JFif15RegZjzUC7sGyhSLoFUnXimQHlJIlaGg+Vr+vh23ujuk8uWbwf6q8CF03tz4edapNjNQ+SCuGRJkINMaGGTzgBwoStqctW97kU0Z+A4cqgyMG8V8ZvSG7it0puvEOIYw5rtCA7Svueoxb5UMO33HTJbIuILYxnfEyUIHSsziJHGhRFJJ7PcNH3B4Ogew4pg31GaTi9pIHKHt/YE6WKj7P7HxpTVvgBsI27Pveo4PPkH4yCwjZlntIAvJhn+6czWlsTsmf+EUSf+u1mst9EmzJ/BztwNxcUjlAkf1E3UzoEKB70ShX+201s+/Z9VrHZj4Ku7Ptht9N5F8J01j2+qYCnmeLK9AWqkanEZy5N+hICP1XbFk3IlKyUW4Km0CXwZmXlvdC5Juyt74uJfeiNcsarU75daE2Zx4+j76+JtN8BKgrIAzEcyLOLCOxspAtxGB8=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILuPMhHnuBKJH3E1cndLaLMVE35g920qreV5wjp7kiGA#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMjB1VLvlmcfY82jQpLEcCHkJB16T8jGBBdZAl8DHhdWgqjciDgZx2zOlmbn8OtO4dCPZsLT8VomlJYVqIcvuZ4=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2ocldELG9EA3TbFx5afl1mbwf9X+3Gzx1pKWvAq8+0s5gE2NeAD23paYiiaQ+/r8QE6CHtXOoy/H9FGAGU3oxMrZnEX7nslelo1+Q7jWdE7ILrzUhQpkJeXJNMrA3p7aBbMxEqMXO9Ydl3Cu0CA+jItIQW1oTWLvS+BsWbES09z++jcPgu6HJu1lFXD9GgU53AfhpFcnhuxK8AnNyG1iy1Zus5Xi2NlME94THioW0/1Ek8Pl/PbSdpaErM1lgrZ7Yl/MdCelTNQI4tQrJebtNynEMhrYTBwbruS6YIia/ZSxDJZWt9bg1dpkd24KSpr4hz5kDn4sCFHyPV/JMYmuvTwFByBXc92tBbYeQU5KMBP8OFjlzfm1uAfnM1BOyrPOy7E5RFig010mTP/VruBFb/T+3Z9DqjZCkGagdrKrV80AwqnAsn/mMG/tHarrHLr8BRX1UIFUz2qfFaBpSkmeQ6u3ERLQyvJIjXaXjvvmQVDRQxd8P5HWM57joMC2P+c8=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFTUVWfsHbDnQr7ZM9BkSRv9ghRtTlzwZgmDm9W4jCII#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGjBy4pT9xvRinN5D7FG54iZjTb5U7Le6fRnUKrD4anfJZQ1Vd0mJxikxxi0T2VsVngeW+U82a0S7cK3UeWIL9s=#012 create=True mode=0644 path=/tmp/ansible.w0oyrl2_ state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:30:10 np0005592158 python3.9[68962]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.w0oyrl2_' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:30:10 np0005592158 python3.9[69116]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.w0oyrl2_ state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:30:11 np0005592158 systemd[1]: session-16.scope: Deactivated successfully.
Jan 22 08:30:11 np0005592158 systemd[1]: session-16.scope: Consumed 3.545s CPU time.
Jan 22 08:30:11 np0005592158 systemd-logind[787]: Session 16 logged out. Waiting for processes to exit.
Jan 22 08:30:11 np0005592158 systemd-logind[787]: Removed session 16.
Jan 22 08:30:17 np0005592158 systemd-logind[787]: New session 17 of user zuul.
Jan 22 08:30:17 np0005592158 systemd[1]: Started Session 17 of User zuul.
Jan 22 08:30:18 np0005592158 python3.9[69294]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:30:19 np0005592158 python3.9[69450]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 22 08:30:20 np0005592158 python3.9[69604]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:30:21 np0005592158 python3.9[69757]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:30:22 np0005592158 python3.9[69910]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:30:23 np0005592158 python3.9[70064]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:30:24 np0005592158 python3.9[70219]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:30:24 np0005592158 systemd[1]: session-17.scope: Deactivated successfully.
Jan 22 08:30:24 np0005592158 systemd[1]: session-17.scope: Consumed 4.874s CPU time.
Jan 22 08:30:24 np0005592158 systemd-logind[787]: Session 17 logged out. Waiting for processes to exit.
Jan 22 08:30:24 np0005592158 systemd-logind[787]: Removed session 17.
Jan 22 08:30:30 np0005592158 systemd-logind[787]: New session 18 of user zuul.
Jan 22 08:30:30 np0005592158 systemd[1]: Started Session 18 of User zuul.
Jan 22 08:30:31 np0005592158 python3.9[70398]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:30:32 np0005592158 python3.9[70554]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:30:33 np0005592158 python3.9[70638]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 22 08:30:36 np0005592158 python3.9[70789]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:30:37 np0005592158 python3.9[70940]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 22 08:30:38 np0005592158 python3.9[71090]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:30:38 np0005592158 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 08:30:39 np0005592158 python3.9[71241]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:30:40 np0005592158 systemd-logind[787]: Session 18 logged out. Waiting for processes to exit.
Jan 22 08:30:40 np0005592158 systemd[1]: session-18.scope: Deactivated successfully.
Jan 22 08:30:40 np0005592158 systemd[1]: session-18.scope: Consumed 6.420s CPU time.
Jan 22 08:30:40 np0005592158 systemd-logind[787]: Removed session 18.
Jan 22 08:30:44 np0005592158 chronyd[58482]: Selected source 23.159.16.194 (pool.ntp.org)
Jan 22 08:30:48 np0005592158 systemd-logind[787]: New session 19 of user zuul.
Jan 22 08:30:48 np0005592158 systemd[1]: Started Session 19 of User zuul.
Jan 22 08:30:55 np0005592158 python3[72007]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:30:57 np0005592158 python3[72103]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 22 08:30:59 np0005592158 python3[72130]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 22 08:30:59 np0005592158 python3[72156]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:30:59 np0005592158 kernel: loop: module loaded
Jan 22 08:30:59 np0005592158 kernel: loop3: detected capacity change from 0 to 14680064
Jan 22 08:31:00 np0005592158 python3[72191]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:31:00 np0005592158 lvm[72194]: PV /dev/loop3 not used.
Jan 22 08:31:00 np0005592158 lvm[72196]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 08:31:00 np0005592158 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Jan 22 08:31:00 np0005592158 lvm[72206]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 08:31:00 np0005592158 lvm[72206]: VG ceph_vg0 finished
Jan 22 08:31:00 np0005592158 lvm[72203]:  1 logical volume(s) in volume group "ceph_vg0" now active
Jan 22 08:31:00 np0005592158 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Jan 22 08:31:00 np0005592158 python3[72284]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 08:31:01 np0005592158 python3[72357]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769088660.569294-37030-243449598562900/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:31:02 np0005592158 python3[72407]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:31:02 np0005592158 systemd[1]: Reloading.
Jan 22 08:31:02 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:31:02 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:31:02 np0005592158 systemd[1]: Starting Ceph OSD losetup...
Jan 22 08:31:02 np0005592158 bash[72447]: /dev/loop3: [64513]:4328449 (/var/lib/ceph-osd-0.img)
Jan 22 08:31:02 np0005592158 systemd[1]: Finished Ceph OSD losetup.
Jan 22 08:31:02 np0005592158 lvm[72449]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 08:31:02 np0005592158 lvm[72449]: VG ceph_vg0 finished
Jan 22 08:31:04 np0005592158 python3[72473]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:33:27 np0005592158 systemd-logind[787]: New session 20 of user ceph-admin.
Jan 22 08:33:27 np0005592158 systemd[1]: Created slice User Slice of UID 42477.
Jan 22 08:33:27 np0005592158 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 22 08:33:27 np0005592158 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 22 08:33:27 np0005592158 systemd[1]: Starting User Manager for UID 42477...
Jan 22 08:33:27 np0005592158 systemd[72521]: Queued start job for default target Main User Target.
Jan 22 08:33:27 np0005592158 systemd[72521]: Created slice User Application Slice.
Jan 22 08:33:27 np0005592158 systemd[72521]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 22 08:33:27 np0005592158 systemd[72521]: Started Daily Cleanup of User's Temporary Directories.
Jan 22 08:33:27 np0005592158 systemd[72521]: Reached target Paths.
Jan 22 08:33:27 np0005592158 systemd[72521]: Reached target Timers.
Jan 22 08:33:27 np0005592158 systemd[72521]: Starting D-Bus User Message Bus Socket...
Jan 22 08:33:27 np0005592158 systemd[72521]: Starting Create User's Volatile Files and Directories...
Jan 22 08:33:27 np0005592158 systemd[72521]: Finished Create User's Volatile Files and Directories.
Jan 22 08:33:27 np0005592158 systemd[72521]: Listening on D-Bus User Message Bus Socket.
Jan 22 08:33:27 np0005592158 systemd[72521]: Reached target Sockets.
Jan 22 08:33:27 np0005592158 systemd[72521]: Reached target Basic System.
Jan 22 08:33:27 np0005592158 systemd[72521]: Reached target Main User Target.
Jan 22 08:33:27 np0005592158 systemd[72521]: Startup finished in 124ms.
Jan 22 08:33:27 np0005592158 systemd[1]: Started User Manager for UID 42477.
Jan 22 08:33:27 np0005592158 systemd[1]: Started Session 20 of User ceph-admin.
Jan 22 08:33:27 np0005592158 systemd-logind[787]: New session 22 of user ceph-admin.
Jan 22 08:33:27 np0005592158 systemd[1]: Started Session 22 of User ceph-admin.
Jan 22 08:33:28 np0005592158 systemd-logind[787]: New session 23 of user ceph-admin.
Jan 22 08:33:28 np0005592158 systemd[1]: Started Session 23 of User ceph-admin.
Jan 22 08:33:28 np0005592158 systemd-logind[787]: New session 24 of user ceph-admin.
Jan 22 08:33:28 np0005592158 systemd[1]: Started Session 24 of User ceph-admin.
Jan 22 08:33:29 np0005592158 systemd-logind[787]: New session 25 of user ceph-admin.
Jan 22 08:33:29 np0005592158 systemd[1]: Started Session 25 of User ceph-admin.
Jan 22 08:33:29 np0005592158 systemd-logind[787]: New session 26 of user ceph-admin.
Jan 22 08:33:29 np0005592158 systemd[1]: Started Session 26 of User ceph-admin.
Jan 22 08:33:29 np0005592158 systemd-logind[787]: New session 27 of user ceph-admin.
Jan 22 08:33:29 np0005592158 systemd[1]: Started Session 27 of User ceph-admin.
Jan 22 08:33:30 np0005592158 systemd-logind[787]: New session 28 of user ceph-admin.
Jan 22 08:33:30 np0005592158 systemd[1]: Started Session 28 of User ceph-admin.
Jan 22 08:33:30 np0005592158 systemd-logind[787]: New session 29 of user ceph-admin.
Jan 22 08:33:30 np0005592158 systemd[1]: Started Session 29 of User ceph-admin.
Jan 22 08:33:31 np0005592158 systemd-logind[787]: New session 30 of user ceph-admin.
Jan 22 08:33:31 np0005592158 systemd[1]: Started Session 30 of User ceph-admin.
Jan 22 08:33:31 np0005592158 systemd-logind[787]: New session 31 of user ceph-admin.
Jan 22 08:33:31 np0005592158 systemd[1]: Started Session 31 of User ceph-admin.
Jan 22 08:33:32 np0005592158 systemd-logind[787]: New session 32 of user ceph-admin.
Jan 22 08:33:32 np0005592158 systemd[1]: Started Session 32 of User ceph-admin.
Jan 22 08:33:32 np0005592158 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:33:33 np0005592158 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:33:33 np0005592158 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:33:33 np0005592158 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:33:34 np0005592158 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 73493 (sysctl)
Jan 22 08:33:34 np0005592158 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:33:34 np0005592158 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 22 08:33:34 np0005592158 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 22 08:33:35 np0005592158 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:33:36 np0005592158 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:33:36 np0005592158 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:33:40 np0005592158 systemd[1]: var-lib-containers-storage-overlay-compat827076912-merged.mount: Deactivated successfully.
Jan 22 08:33:41 np0005592158 systemd[1]: var-lib-containers-storage-overlay-compat827076912-lower\x2dmapped.mount: Deactivated successfully.
Jan 22 08:33:59 np0005592158 podman[73768]: 2026-01-22 13:33:59.127069829 +0000 UTC m=+22.895917197 container create d78b4536326afe498ba7aa82ad00a4cbac8cd405f9e96a708a1533bd79e13af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_euclid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:33:59 np0005592158 podman[73768]: 2026-01-22 13:33:59.086232759 +0000 UTC m=+22.855080147 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:33:59 np0005592158 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck537121439-merged.mount: Deactivated successfully.
Jan 22 08:33:59 np0005592158 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 22 08:33:59 np0005592158 systemd[1]: Started libpod-conmon-d78b4536326afe498ba7aa82ad00a4cbac8cd405f9e96a708a1533bd79e13af0.scope.
Jan 22 08:33:59 np0005592158 systemd[1]: Started libcrun container.
Jan 22 08:33:59 np0005592158 podman[73768]: 2026-01-22 13:33:59.420002464 +0000 UTC m=+23.188849862 container init d78b4536326afe498ba7aa82ad00a4cbac8cd405f9e96a708a1533bd79e13af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 22 08:33:59 np0005592158 podman[73768]: 2026-01-22 13:33:59.429513355 +0000 UTC m=+23.198360723 container start d78b4536326afe498ba7aa82ad00a4cbac8cd405f9e96a708a1533bd79e13af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 08:33:59 np0005592158 peaceful_euclid[73830]: 167 167
Jan 22 08:33:59 np0005592158 systemd[1]: libpod-d78b4536326afe498ba7aa82ad00a4cbac8cd405f9e96a708a1533bd79e13af0.scope: Deactivated successfully.
Jan 22 08:33:59 np0005592158 podman[73768]: 2026-01-22 13:33:59.461560128 +0000 UTC m=+23.230407496 container attach d78b4536326afe498ba7aa82ad00a4cbac8cd405f9e96a708a1533bd79e13af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_euclid, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:33:59 np0005592158 podman[73768]: 2026-01-22 13:33:59.46239247 +0000 UTC m=+23.231239838 container died d78b4536326afe498ba7aa82ad00a4cbac8cd405f9e96a708a1533bd79e13af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 22 08:33:59 np0005592158 systemd[1]: var-lib-containers-storage-overlay-25c20e2f4a20ebb79d6474409308ed6b3b66bf56e2d223ddb697681e8577d2bd-merged.mount: Deactivated successfully.
Jan 22 08:33:59 np0005592158 podman[73768]: 2026-01-22 13:33:59.526993749 +0000 UTC m=+23.295841127 container remove d78b4536326afe498ba7aa82ad00a4cbac8cd405f9e96a708a1533bd79e13af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_euclid, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:33:59 np0005592158 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:33:59 np0005592158 systemd[1]: libpod-conmon-d78b4536326afe498ba7aa82ad00a4cbac8cd405f9e96a708a1533bd79e13af0.scope: Deactivated successfully.
Jan 22 08:33:59 np0005592158 podman[73856]: 2026-01-22 13:33:59.710608563 +0000 UTC m=+0.048500905 container create 7b9336ec8aee2184619f59909dda5b47797c109fd43920936e36cea8d7e36670 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_nightingale, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 08:33:59 np0005592158 systemd[1]: Started libpod-conmon-7b9336ec8aee2184619f59909dda5b47797c109fd43920936e36cea8d7e36670.scope.
Jan 22 08:33:59 np0005592158 podman[73856]: 2026-01-22 13:33:59.689105261 +0000 UTC m=+0.026997623 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:33:59 np0005592158 systemd[1]: Started libcrun container.
Jan 22 08:33:59 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e0a331e3b8cffc667eb10bbc2221287d9ce896b52028de1d6b14dac5e57b174/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:59 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e0a331e3b8cffc667eb10bbc2221287d9ce896b52028de1d6b14dac5e57b174/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:33:59 np0005592158 podman[73856]: 2026-01-22 13:33:59.829998399 +0000 UTC m=+0.167890751 container init 7b9336ec8aee2184619f59909dda5b47797c109fd43920936e36cea8d7e36670 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:33:59 np0005592158 podman[73856]: 2026-01-22 13:33:59.838344709 +0000 UTC m=+0.176237041 container start 7b9336ec8aee2184619f59909dda5b47797c109fd43920936e36cea8d7e36670 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:33:59 np0005592158 podman[73856]: 2026-01-22 13:33:59.858116393 +0000 UTC m=+0.196008755 container attach 7b9336ec8aee2184619f59909dda5b47797c109fd43920936e36cea8d7e36670 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 08:34:01 np0005592158 blissful_nightingale[73872]: [
Jan 22 08:34:01 np0005592158 blissful_nightingale[73872]:    {
Jan 22 08:34:01 np0005592158 blissful_nightingale[73872]:        "available": false,
Jan 22 08:34:01 np0005592158 blissful_nightingale[73872]:        "ceph_device": false,
Jan 22 08:34:01 np0005592158 blissful_nightingale[73872]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 22 08:34:01 np0005592158 blissful_nightingale[73872]:        "lsm_data": {},
Jan 22 08:34:01 np0005592158 blissful_nightingale[73872]:        "lvs": [],
Jan 22 08:34:01 np0005592158 blissful_nightingale[73872]:        "path": "/dev/sr0",
Jan 22 08:34:01 np0005592158 blissful_nightingale[73872]:        "rejected_reasons": [
Jan 22 08:34:01 np0005592158 blissful_nightingale[73872]:            "Has a FileSystem",
Jan 22 08:34:01 np0005592158 blissful_nightingale[73872]:            "Insufficient space (<5GB)"
Jan 22 08:34:01 np0005592158 blissful_nightingale[73872]:        ],
Jan 22 08:34:01 np0005592158 blissful_nightingale[73872]:        "sys_api": {
Jan 22 08:34:01 np0005592158 blissful_nightingale[73872]:            "actuators": null,
Jan 22 08:34:01 np0005592158 blissful_nightingale[73872]:            "device_nodes": "sr0",
Jan 22 08:34:01 np0005592158 blissful_nightingale[73872]:            "devname": "sr0",
Jan 22 08:34:01 np0005592158 blissful_nightingale[73872]:            "human_readable_size": "482.00 KB",
Jan 22 08:34:01 np0005592158 blissful_nightingale[73872]:            "id_bus": "ata",
Jan 22 08:34:01 np0005592158 blissful_nightingale[73872]:            "model": "QEMU DVD-ROM",
Jan 22 08:34:01 np0005592158 blissful_nightingale[73872]:            "nr_requests": "2",
Jan 22 08:34:01 np0005592158 blissful_nightingale[73872]:            "parent": "/dev/sr0",
Jan 22 08:34:01 np0005592158 blissful_nightingale[73872]:            "partitions": {},
Jan 22 08:34:01 np0005592158 blissful_nightingale[73872]:            "path": "/dev/sr0",
Jan 22 08:34:01 np0005592158 blissful_nightingale[73872]:            "removable": "1",
Jan 22 08:34:01 np0005592158 blissful_nightingale[73872]:            "rev": "2.5+",
Jan 22 08:34:01 np0005592158 blissful_nightingale[73872]:            "ro": "0",
Jan 22 08:34:01 np0005592158 blissful_nightingale[73872]:            "rotational": "1",
Jan 22 08:34:01 np0005592158 blissful_nightingale[73872]:            "sas_address": "",
Jan 22 08:34:01 np0005592158 blissful_nightingale[73872]:            "sas_device_handle": "",
Jan 22 08:34:01 np0005592158 blissful_nightingale[73872]:            "scheduler_mode": "mq-deadline",
Jan 22 08:34:01 np0005592158 blissful_nightingale[73872]:            "sectors": 0,
Jan 22 08:34:01 np0005592158 blissful_nightingale[73872]:            "sectorsize": "2048",
Jan 22 08:34:01 np0005592158 blissful_nightingale[73872]:            "size": 493568.0,
Jan 22 08:34:01 np0005592158 blissful_nightingale[73872]:            "support_discard": "2048",
Jan 22 08:34:01 np0005592158 blissful_nightingale[73872]:            "type": "disk",
Jan 22 08:34:01 np0005592158 blissful_nightingale[73872]:            "vendor": "QEMU"
Jan 22 08:34:01 np0005592158 blissful_nightingale[73872]:        }
Jan 22 08:34:01 np0005592158 blissful_nightingale[73872]:    }
Jan 22 08:34:01 np0005592158 blissful_nightingale[73872]: ]
Jan 22 08:34:01 np0005592158 systemd[1]: libpod-7b9336ec8aee2184619f59909dda5b47797c109fd43920936e36cea8d7e36670.scope: Deactivated successfully.
Jan 22 08:34:01 np0005592158 systemd[1]: libpod-7b9336ec8aee2184619f59909dda5b47797c109fd43920936e36cea8d7e36670.scope: Consumed 1.264s CPU time.
Jan 22 08:34:01 np0005592158 conmon[73872]: conmon 7b9336ec8aee2184619f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7b9336ec8aee2184619f59909dda5b47797c109fd43920936e36cea8d7e36670.scope/container/memory.events
Jan 22 08:34:01 np0005592158 podman[73856]: 2026-01-22 13:34:01.105006876 +0000 UTC m=+1.442899208 container died 7b9336ec8aee2184619f59909dda5b47797c109fd43920936e36cea8d7e36670 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_nightingale, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:34:07 np0005592158 systemd[1]: var-lib-containers-storage-overlay-5e0a331e3b8cffc667eb10bbc2221287d9ce896b52028de1d6b14dac5e57b174-merged.mount: Deactivated successfully.
Jan 22 08:34:07 np0005592158 podman[73856]: 2026-01-22 13:34:07.475679698 +0000 UTC m=+7.813572030 container remove 7b9336ec8aee2184619f59909dda5b47797c109fd43920936e36cea8d7e36670 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_nightingale, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:34:07 np0005592158 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:34:07 np0005592158 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:34:07 np0005592158 systemd[1]: libpod-conmon-7b9336ec8aee2184619f59909dda5b47797c109fd43920936e36cea8d7e36670.scope: Deactivated successfully.
Jan 22 08:34:12 np0005592158 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:34:12 np0005592158 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:34:12 np0005592158 podman[76786]: 2026-01-22 13:34:12.837393936 +0000 UTC m=+0.022030778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:34:13 np0005592158 podman[76786]: 2026-01-22 13:34:13.753897493 +0000 UTC m=+0.938534335 container create ed033454b7044fcba96ba5596c7066bbbeeb2e0d386e4f9326334476f76c739c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 22 08:34:13 np0005592158 systemd[1]: Started libpod-conmon-ed033454b7044fcba96ba5596c7066bbbeeb2e0d386e4f9326334476f76c739c.scope.
Jan 22 08:34:13 np0005592158 systemd[1]: Started libcrun container.
Jan 22 08:34:13 np0005592158 podman[76786]: 2026-01-22 13:34:13.847191901 +0000 UTC m=+1.031828733 container init ed033454b7044fcba96ba5596c7066bbbeeb2e0d386e4f9326334476f76c739c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 22 08:34:13 np0005592158 podman[76786]: 2026-01-22 13:34:13.856111286 +0000 UTC m=+1.040748098 container start ed033454b7044fcba96ba5596c7066bbbeeb2e0d386e4f9326334476f76c739c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 22 08:34:13 np0005592158 podman[76786]: 2026-01-22 13:34:13.859965253 +0000 UTC m=+1.044602085 container attach ed033454b7044fcba96ba5596c7066bbbeeb2e0d386e4f9326334476f76c739c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:34:13 np0005592158 brave_ishizaka[76802]: 167 167
Jan 22 08:34:13 np0005592158 systemd[1]: libpod-ed033454b7044fcba96ba5596c7066bbbeeb2e0d386e4f9326334476f76c739c.scope: Deactivated successfully.
Jan 22 08:34:13 np0005592158 podman[76786]: 2026-01-22 13:34:13.864221409 +0000 UTC m=+1.048858221 container died ed033454b7044fcba96ba5596c7066bbbeeb2e0d386e4f9326334476f76c739c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:34:13 np0005592158 systemd[1]: var-lib-containers-storage-overlay-eb0427dc426b7c1d1c34bb6530721b1c306ad9cf8bc78f6cb375c86ac002f5b6-merged.mount: Deactivated successfully.
Jan 22 08:34:13 np0005592158 podman[76786]: 2026-01-22 13:34:13.90491523 +0000 UTC m=+1.089552042 container remove ed033454b7044fcba96ba5596c7066bbbeeb2e0d386e4f9326334476f76c739c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:34:13 np0005592158 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:34:13 np0005592158 systemd[1]: libpod-conmon-ed033454b7044fcba96ba5596c7066bbbeeb2e0d386e4f9326334476f76c739c.scope: Deactivated successfully.
Jan 22 08:34:13 np0005592158 systemd[1]: Reloading.
Jan 22 08:34:14 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:34:14 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:34:14 np0005592158 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:34:14 np0005592158 systemd[1]: Reloading.
Jan 22 08:34:14 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:34:14 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:34:14 np0005592158 systemd[1]: Reached target All Ceph clusters and services.
Jan 22 08:34:14 np0005592158 systemd[1]: Reloading.
Jan 22 08:34:14 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:34:14 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:34:14 np0005592158 systemd[1]: Reached target Ceph cluster 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 08:34:15 np0005592158 systemd[1]: Reloading.
Jan 22 08:34:15 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:34:15 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:34:15 np0005592158 systemd[1]: Reloading.
Jan 22 08:34:15 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:34:15 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:34:16 np0005592158 systemd[1]: Created slice Slice /system/ceph-088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 08:34:16 np0005592158 systemd[1]: Reached target System Time Set.
Jan 22 08:34:16 np0005592158 systemd[1]: Reached target System Time Synchronized.
Jan 22 08:34:16 np0005592158 systemd[1]: Starting Ceph crash.compute-1 for 088fe176-0106-5401-803c-2da38b73b76a...
Jan 22 08:34:16 np0005592158 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:34:16 np0005592158 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:34:16 np0005592158 podman[77059]: 2026-01-22 13:34:16.513621768 +0000 UTC m=+0.079716536 container create 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 22 08:34:16 np0005592158 podman[77059]: 2026-01-22 13:34:16.456426854 +0000 UTC m=+0.022521632 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:34:16 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/124975d8d98d08adf71407b0905c5f28b574dc10075759ae16d1ad1373565dba/merged/etc/ceph/ceph.client.crash.compute-1.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:16 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/124975d8d98d08adf71407b0905c5f28b574dc10075759ae16d1ad1373565dba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:16 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/124975d8d98d08adf71407b0905c5f28b574dc10075759ae16d1ad1373565dba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:16 np0005592158 podman[77059]: 2026-01-22 13:34:16.628197752 +0000 UTC m=+0.194292530 container init 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 08:34:16 np0005592158 podman[77059]: 2026-01-22 13:34:16.63614004 +0000 UTC m=+0.202234798 container start 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:34:16 np0005592158 bash[77059]: 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891
Jan 22 08:34:16 np0005592158 systemd[1]: Started Ceph crash.compute-1 for 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 08:34:16 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1[77074]: INFO:ceph-crash:pinging cluster to exercise our key
Jan 22 08:34:17 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1[77074]: 2026-01-22T13:34:17.114+0000 7f9412032640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 22 08:34:17 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1[77074]: 2026-01-22T13:34:17.114+0000 7f9412032640 -1 AuthRegistry(0x7f940c067440) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 22 08:34:17 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1[77074]: 2026-01-22T13:34:17.116+0000 7f9412032640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 22 08:34:17 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1[77074]: 2026-01-22T13:34:17.116+0000 7f9412032640 -1 AuthRegistry(0x7f9412031000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 22 08:34:17 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1[77074]: 2026-01-22T13:34:17.119+0000 7f940b7fe640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 22 08:34:17 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1[77074]: 2026-01-22T13:34:17.119+0000 7f9412032640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Jan 22 08:34:17 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1[77074]: [errno 13] RADOS permission denied (error connecting to the cluster)
Jan 22 08:34:17 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1[77074]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Jan 22 08:34:17 np0005592158 podman[77230]: 2026-01-22 13:34:17.517820979 +0000 UTC m=+0.048594778 container create f54dc757e4f669d313dbfc02001cc6f71bb79b0fcdd02cfdd51e40af953eedb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mendeleev, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 22 08:34:17 np0005592158 podman[77230]: 2026-01-22 13:34:17.495127034 +0000 UTC m=+0.025900853 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:34:17 np0005592158 systemd[1]: Started libpod-conmon-f54dc757e4f669d313dbfc02001cc6f71bb79b0fcdd02cfdd51e40af953eedb3.scope.
Jan 22 08:34:17 np0005592158 systemd[1]: Started libcrun container.
Jan 22 08:34:17 np0005592158 podman[77230]: 2026-01-22 13:34:17.649262687 +0000 UTC m=+0.180036506 container init f54dc757e4f669d313dbfc02001cc6f71bb79b0fcdd02cfdd51e40af953eedb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mendeleev, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 22 08:34:17 np0005592158 podman[77230]: 2026-01-22 13:34:17.658021948 +0000 UTC m=+0.188795747 container start f54dc757e4f669d313dbfc02001cc6f71bb79b0fcdd02cfdd51e40af953eedb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mendeleev, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 08:34:17 np0005592158 podman[77230]: 2026-01-22 13:34:17.66133341 +0000 UTC m=+0.192107229 container attach f54dc757e4f669d313dbfc02001cc6f71bb79b0fcdd02cfdd51e40af953eedb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 22 08:34:17 np0005592158 lucid_mendeleev[77247]: 167 167
Jan 22 08:34:17 np0005592158 systemd[1]: libpod-f54dc757e4f669d313dbfc02001cc6f71bb79b0fcdd02cfdd51e40af953eedb3.scope: Deactivated successfully.
Jan 22 08:34:17 np0005592158 conmon[77247]: conmon f54dc757e4f669d313db <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f54dc757e4f669d313dbfc02001cc6f71bb79b0fcdd02cfdd51e40af953eedb3.scope/container/memory.events
Jan 22 08:34:17 np0005592158 podman[77230]: 2026-01-22 13:34:17.666388388 +0000 UTC m=+0.197162187 container died f54dc757e4f669d313dbfc02001cc6f71bb79b0fcdd02cfdd51e40af953eedb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mendeleev, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 22 08:34:17 np0005592158 systemd[1]: var-lib-containers-storage-overlay-6a2ad9523f7099c64b2e8593c7b7b78e6e56dac441f4159f535cc1a1fd4c17e4-merged.mount: Deactivated successfully.
Jan 22 08:34:17 np0005592158 podman[77230]: 2026-01-22 13:34:17.701906867 +0000 UTC m=+0.232680676 container remove f54dc757e4f669d313dbfc02001cc6f71bb79b0fcdd02cfdd51e40af953eedb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 08:34:17 np0005592158 systemd[1]: libpod-conmon-f54dc757e4f669d313dbfc02001cc6f71bb79b0fcdd02cfdd51e40af953eedb3.scope: Deactivated successfully.
Jan 22 08:34:17 np0005592158 podman[77270]: 2026-01-22 13:34:17.878990791 +0000 UTC m=+0.048019403 container create 6b700c4540a32dcb0005f9c8bca0fe2cf8040f203bd4d7a15f043e990c25debf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hertz, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 22 08:34:17 np0005592158 systemd[1]: Started libpod-conmon-6b700c4540a32dcb0005f9c8bca0fe2cf8040f203bd4d7a15f043e990c25debf.scope.
Jan 22 08:34:17 np0005592158 systemd[1]: Started libcrun container.
Jan 22 08:34:17 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0d6c15a4a7f4f63e738f0ef42a1ac9e86ef93fbf8c1ca4bc1d4c717b8e56930/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:17 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0d6c15a4a7f4f63e738f0ef42a1ac9e86ef93fbf8c1ca4bc1d4c717b8e56930/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:17 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0d6c15a4a7f4f63e738f0ef42a1ac9e86ef93fbf8c1ca4bc1d4c717b8e56930/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:17 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0d6c15a4a7f4f63e738f0ef42a1ac9e86ef93fbf8c1ca4bc1d4c717b8e56930/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:17 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0d6c15a4a7f4f63e738f0ef42a1ac9e86ef93fbf8c1ca4bc1d4c717b8e56930/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:17 np0005592158 podman[77270]: 2026-01-22 13:34:17.857543301 +0000 UTC m=+0.026571923 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:34:17 np0005592158 podman[77270]: 2026-01-22 13:34:17.964012002 +0000 UTC m=+0.133040624 container init 6b700c4540a32dcb0005f9c8bca0fe2cf8040f203bd4d7a15f043e990c25debf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hertz, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:34:17 np0005592158 podman[77270]: 2026-01-22 13:34:17.972380122 +0000 UTC m=+0.141408714 container start 6b700c4540a32dcb0005f9c8bca0fe2cf8040f203bd4d7a15f043e990c25debf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 22 08:34:17 np0005592158 podman[77270]: 2026-01-22 13:34:17.976001702 +0000 UTC m=+0.145030324 container attach 6b700c4540a32dcb0005f9c8bca0fe2cf8040f203bd4d7a15f043e990c25debf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:34:18 np0005592158 festive_hertz[77286]: --> passed data devices: 0 physical, 1 LVM
Jan 22 08:34:18 np0005592158 festive_hertz[77286]: --> relative data size: 1.0
Jan 22 08:34:18 np0005592158 festive_hertz[77286]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 22 08:34:18 np0005592158 festive_hertz[77286]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 729e7fcc-4be0-4e65-a251-dac5739e2fea
Jan 22 08:34:19 np0005592158 festive_hertz[77286]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 22 08:34:19 np0005592158 festive_hertz[77286]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Jan 22 08:34:19 np0005592158 lvm[77334]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 08:34:19 np0005592158 lvm[77334]: VG ceph_vg0 finished
Jan 22 08:34:19 np0005592158 festive_hertz[77286]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Jan 22 08:34:19 np0005592158 festive_hertz[77286]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 22 08:34:19 np0005592158 festive_hertz[77286]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Jan 22 08:34:19 np0005592158 festive_hertz[77286]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Jan 22 08:34:19 np0005592158 festive_hertz[77286]: stderr: got monmap epoch 1
Jan 22 08:34:19 np0005592158 festive_hertz[77286]: --> Creating keyring file for osd.1
Jan 22 08:34:19 np0005592158 festive_hertz[77286]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Jan 22 08:34:19 np0005592158 festive_hertz[77286]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Jan 22 08:34:19 np0005592158 festive_hertz[77286]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 729e7fcc-4be0-4e65-a251-dac5739e2fea --setuser ceph --setgroup ceph
Jan 22 08:34:23 np0005592158 festive_hertz[77286]: stderr: 2026-01-22T13:34:19.997+0000 7f009c8c0740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 22 08:34:23 np0005592158 festive_hertz[77286]: stderr: 2026-01-22T13:34:19.997+0000 7f009c8c0740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 22 08:34:23 np0005592158 festive_hertz[77286]: stderr: 2026-01-22T13:34:19.997+0000 7f009c8c0740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 22 08:34:23 np0005592158 festive_hertz[77286]: stderr: 2026-01-22T13:34:19.997+0000 7f009c8c0740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Jan 22 08:34:23 np0005592158 festive_hertz[77286]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Jan 22 08:34:23 np0005592158 festive_hertz[77286]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 22 08:34:23 np0005592158 festive_hertz[77286]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 22 08:34:23 np0005592158 festive_hertz[77286]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Jan 22 08:34:23 np0005592158 festive_hertz[77286]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 22 08:34:23 np0005592158 festive_hertz[77286]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 22 08:34:23 np0005592158 festive_hertz[77286]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 22 08:34:23 np0005592158 festive_hertz[77286]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 22 08:34:23 np0005592158 festive_hertz[77286]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Jan 22 08:34:23 np0005592158 systemd[1]: libpod-6b700c4540a32dcb0005f9c8bca0fe2cf8040f203bd4d7a15f043e990c25debf.scope: Deactivated successfully.
Jan 22 08:34:23 np0005592158 systemd[1]: libpod-6b700c4540a32dcb0005f9c8bca0fe2cf8040f203bd4d7a15f043e990c25debf.scope: Consumed 2.559s CPU time.
Jan 22 08:34:23 np0005592158 podman[77270]: 2026-01-22 13:34:23.670897141 +0000 UTC m=+5.839925743 container died 6b700c4540a32dcb0005f9c8bca0fe2cf8040f203bd4d7a15f043e990c25debf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hertz, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 22 08:34:24 np0005592158 systemd[1]: var-lib-containers-storage-overlay-a0d6c15a4a7f4f63e738f0ef42a1ac9e86ef93fbf8c1ca4bc1d4c717b8e56930-merged.mount: Deactivated successfully.
Jan 22 08:34:24 np0005592158 podman[77270]: 2026-01-22 13:34:24.503116118 +0000 UTC m=+6.672145000 container remove 6b700c4540a32dcb0005f9c8bca0fe2cf8040f203bd4d7a15f043e990c25debf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 22 08:34:24 np0005592158 systemd[1]: libpod-conmon-6b700c4540a32dcb0005f9c8bca0fe2cf8040f203bd4d7a15f043e990c25debf.scope: Deactivated successfully.
Jan 22 08:34:25 np0005592158 podman[78393]: 2026-01-22 13:34:25.151816294 +0000 UTC m=+0.023640571 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:34:26 np0005592158 podman[78393]: 2026-01-22 13:34:26.018912762 +0000 UTC m=+0.890737019 container create dd334ef1a3ff06dc245ed5034b7bea1bd177e455616da17a73579ddcf88e67eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 22 08:34:26 np0005592158 systemd[1]: Started libpod-conmon-dd334ef1a3ff06dc245ed5034b7bea1bd177e455616da17a73579ddcf88e67eb.scope.
Jan 22 08:34:26 np0005592158 systemd[1]: Started libcrun container.
Jan 22 08:34:26 np0005592158 podman[78393]: 2026-01-22 13:34:26.560183852 +0000 UTC m=+1.432008129 container init dd334ef1a3ff06dc245ed5034b7bea1bd177e455616da17a73579ddcf88e67eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ardinghelli, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 22 08:34:26 np0005592158 podman[78393]: 2026-01-22 13:34:26.570492986 +0000 UTC m=+1.442317263 container start dd334ef1a3ff06dc245ed5034b7bea1bd177e455616da17a73579ddcf88e67eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 22 08:34:26 np0005592158 podman[78393]: 2026-01-22 13:34:26.574916117 +0000 UTC m=+1.446740374 container attach dd334ef1a3ff06dc245ed5034b7bea1bd177e455616da17a73579ddcf88e67eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ardinghelli, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Jan 22 08:34:26 np0005592158 friendly_ardinghelli[78409]: 167 167
Jan 22 08:34:26 np0005592158 systemd[1]: libpod-dd334ef1a3ff06dc245ed5034b7bea1bd177e455616da17a73579ddcf88e67eb.scope: Deactivated successfully.
Jan 22 08:34:26 np0005592158 podman[78393]: 2026-01-22 13:34:26.577468957 +0000 UTC m=+1.449293214 container died dd334ef1a3ff06dc245ed5034b7bea1bd177e455616da17a73579ddcf88e67eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ardinghelli, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:34:26 np0005592158 systemd[1]: var-lib-containers-storage-overlay-218f37fce0ac8e1bba67247afa38d0110c9ac1ef69c00640cfa87a1dc8f0437c-merged.mount: Deactivated successfully.
Jan 22 08:34:26 np0005592158 podman[78393]: 2026-01-22 13:34:26.629536031 +0000 UTC m=+1.501360288 container remove dd334ef1a3ff06dc245ed5034b7bea1bd177e455616da17a73579ddcf88e67eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 22 08:34:26 np0005592158 systemd[1]: libpod-conmon-dd334ef1a3ff06dc245ed5034b7bea1bd177e455616da17a73579ddcf88e67eb.scope: Deactivated successfully.
Jan 22 08:34:26 np0005592158 podman[78431]: 2026-01-22 13:34:26.80463629 +0000 UTC m=+0.044073403 container create 229c0ecff17cfdf4bfa7a8d26e6eaa2f3d3c175171049d210b989519aefd095f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_maxwell, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:34:26 np0005592158 systemd[1]: Started libpod-conmon-229c0ecff17cfdf4bfa7a8d26e6eaa2f3d3c175171049d210b989519aefd095f.scope.
Jan 22 08:34:26 np0005592158 systemd[1]: Started libcrun container.
Jan 22 08:34:26 np0005592158 podman[78431]: 2026-01-22 13:34:26.785962587 +0000 UTC m=+0.025399720 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:34:26 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20d27fad29194d3c2350c690f25d854db493df8bd6b32364525e64abf90adde4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:26 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20d27fad29194d3c2350c690f25d854db493df8bd6b32364525e64abf90adde4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:26 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20d27fad29194d3c2350c690f25d854db493df8bd6b32364525e64abf90adde4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:26 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20d27fad29194d3c2350c690f25d854db493df8bd6b32364525e64abf90adde4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:26 np0005592158 podman[78431]: 2026-01-22 13:34:26.897767545 +0000 UTC m=+0.137204678 container init 229c0ecff17cfdf4bfa7a8d26e6eaa2f3d3c175171049d210b989519aefd095f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Jan 22 08:34:26 np0005592158 podman[78431]: 2026-01-22 13:34:26.906885615 +0000 UTC m=+0.146322728 container start 229c0ecff17cfdf4bfa7a8d26e6eaa2f3d3c175171049d210b989519aefd095f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:34:26 np0005592158 podman[78431]: 2026-01-22 13:34:26.910525645 +0000 UTC m=+0.149962778 container attach 229c0ecff17cfdf4bfa7a8d26e6eaa2f3d3c175171049d210b989519aefd095f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 22 08:34:27 np0005592158 suspicious_maxwell[78447]: {
Jan 22 08:34:27 np0005592158 suspicious_maxwell[78447]:    "1": [
Jan 22 08:34:27 np0005592158 suspicious_maxwell[78447]:        {
Jan 22 08:34:27 np0005592158 suspicious_maxwell[78447]:            "devices": [
Jan 22 08:34:27 np0005592158 suspicious_maxwell[78447]:                "/dev/loop3"
Jan 22 08:34:27 np0005592158 suspicious_maxwell[78447]:            ],
Jan 22 08:34:27 np0005592158 suspicious_maxwell[78447]:            "lv_name": "ceph_lv0",
Jan 22 08:34:27 np0005592158 suspicious_maxwell[78447]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:34:27 np0005592158 suspicious_maxwell[78447]:            "lv_size": "7511998464",
Jan 22 08:34:27 np0005592158 suspicious_maxwell[78447]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=8FXlZP-7Oop-LAub-ofen-l1Hk-nciS-XBs6Yr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=729e7fcc-4be0-4e65-a251-dac5739e2fea,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 08:34:27 np0005592158 suspicious_maxwell[78447]:            "lv_uuid": "8FXlZP-7Oop-LAub-ofen-l1Hk-nciS-XBs6Yr",
Jan 22 08:34:27 np0005592158 suspicious_maxwell[78447]:            "name": "ceph_lv0",
Jan 22 08:34:27 np0005592158 suspicious_maxwell[78447]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:34:27 np0005592158 suspicious_maxwell[78447]:            "tags": {
Jan 22 08:34:27 np0005592158 suspicious_maxwell[78447]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:34:27 np0005592158 suspicious_maxwell[78447]:                "ceph.block_uuid": "8FXlZP-7Oop-LAub-ofen-l1Hk-nciS-XBs6Yr",
Jan 22 08:34:27 np0005592158 suspicious_maxwell[78447]:                "ceph.cephx_lockbox_secret": "",
Jan 22 08:34:27 np0005592158 suspicious_maxwell[78447]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:34:27 np0005592158 suspicious_maxwell[78447]:                "ceph.cluster_name": "ceph",
Jan 22 08:34:27 np0005592158 suspicious_maxwell[78447]:                "ceph.crush_device_class": "",
Jan 22 08:34:27 np0005592158 suspicious_maxwell[78447]:                "ceph.encrypted": "0",
Jan 22 08:34:27 np0005592158 suspicious_maxwell[78447]:                "ceph.osd_fsid": "729e7fcc-4be0-4e65-a251-dac5739e2fea",
Jan 22 08:34:27 np0005592158 suspicious_maxwell[78447]:                "ceph.osd_id": "1",
Jan 22 08:34:27 np0005592158 suspicious_maxwell[78447]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 08:34:27 np0005592158 suspicious_maxwell[78447]:                "ceph.type": "block",
Jan 22 08:34:27 np0005592158 suspicious_maxwell[78447]:                "ceph.vdo": "0"
Jan 22 08:34:27 np0005592158 suspicious_maxwell[78447]:            },
Jan 22 08:34:27 np0005592158 suspicious_maxwell[78447]:            "type": "block",
Jan 22 08:34:27 np0005592158 suspicious_maxwell[78447]:            "vg_name": "ceph_vg0"
Jan 22 08:34:27 np0005592158 suspicious_maxwell[78447]:        }
Jan 22 08:34:27 np0005592158 suspicious_maxwell[78447]:    ]
Jan 22 08:34:27 np0005592158 suspicious_maxwell[78447]: }
Jan 22 08:34:27 np0005592158 systemd[1]: libpod-229c0ecff17cfdf4bfa7a8d26e6eaa2f3d3c175171049d210b989519aefd095f.scope: Deactivated successfully.
Jan 22 08:34:27 np0005592158 podman[78456]: 2026-01-22 13:34:27.743601287 +0000 UTC m=+0.030508141 container died 229c0ecff17cfdf4bfa7a8d26e6eaa2f3d3c175171049d210b989519aefd095f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_maxwell, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:34:27 np0005592158 systemd[1]: var-lib-containers-storage-overlay-20d27fad29194d3c2350c690f25d854db493df8bd6b32364525e64abf90adde4-merged.mount: Deactivated successfully.
Jan 22 08:34:27 np0005592158 podman[78456]: 2026-01-22 13:34:27.82618262 +0000 UTC m=+0.113089474 container remove 229c0ecff17cfdf4bfa7a8d26e6eaa2f3d3c175171049d210b989519aefd095f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:34:27 np0005592158 systemd[1]: libpod-conmon-229c0ecff17cfdf4bfa7a8d26e6eaa2f3d3c175171049d210b989519aefd095f.scope: Deactivated successfully.
Jan 22 08:34:28 np0005592158 podman[78611]: 2026-01-22 13:34:28.498015533 +0000 UTC m=+0.039659962 container create 0b139ea50a9b294d86eda64c7dff4138a99f6e009fda5659e0e84d70622557bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_sammet, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:34:28 np0005592158 systemd[1]: Started libpod-conmon-0b139ea50a9b294d86eda64c7dff4138a99f6e009fda5659e0e84d70622557bc.scope.
Jan 22 08:34:28 np0005592158 systemd[1]: Started libcrun container.
Jan 22 08:34:28 np0005592158 podman[78611]: 2026-01-22 13:34:28.48155767 +0000 UTC m=+0.023202129 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:34:28 np0005592158 podman[78611]: 2026-01-22 13:34:28.772799076 +0000 UTC m=+0.314443535 container init 0b139ea50a9b294d86eda64c7dff4138a99f6e009fda5659e0e84d70622557bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 22 08:34:28 np0005592158 podman[78611]: 2026-01-22 13:34:28.780854968 +0000 UTC m=+0.322499407 container start 0b139ea50a9b294d86eda64c7dff4138a99f6e009fda5659e0e84d70622557bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_sammet, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 22 08:34:28 np0005592158 wizardly_sammet[78628]: 167 167
Jan 22 08:34:28 np0005592158 systemd[1]: libpod-0b139ea50a9b294d86eda64c7dff4138a99f6e009fda5659e0e84d70622557bc.scope: Deactivated successfully.
Jan 22 08:34:28 np0005592158 podman[78611]: 2026-01-22 13:34:28.809727453 +0000 UTC m=+0.351371922 container attach 0b139ea50a9b294d86eda64c7dff4138a99f6e009fda5659e0e84d70622557bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 22 08:34:28 np0005592158 podman[78611]: 2026-01-22 13:34:28.810811864 +0000 UTC m=+0.352456303 container died 0b139ea50a9b294d86eda64c7dff4138a99f6e009fda5659e0e84d70622557bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_sammet, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:34:28 np0005592158 systemd[1]: var-lib-containers-storage-overlay-74ca805fe3d2873be071169c81c24b6292979b46fabff89c2ba3aedc624f3990-merged.mount: Deactivated successfully.
Jan 22 08:34:28 np0005592158 podman[78611]: 2026-01-22 13:34:28.8698979 +0000 UTC m=+0.411542339 container remove 0b139ea50a9b294d86eda64c7dff4138a99f6e009fda5659e0e84d70622557bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_sammet, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Jan 22 08:34:28 np0005592158 systemd[1]: libpod-conmon-0b139ea50a9b294d86eda64c7dff4138a99f6e009fda5659e0e84d70622557bc.scope: Deactivated successfully.
Jan 22 08:34:29 np0005592158 podman[78662]: 2026-01-22 13:34:29.128908839 +0000 UTC m=+0.046445329 container create 66f760e623d530e7b921c508cf6f38e1f40372532fce45893a69ae1d6816f38b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:34:29 np0005592158 systemd[1]: Started libpod-conmon-66f760e623d530e7b921c508cf6f38e1f40372532fce45893a69ae1d6816f38b.scope.
Jan 22 08:34:29 np0005592158 systemd[1]: Started libcrun container.
Jan 22 08:34:29 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8557d6638de90ac4f208d817743af41ab74aa0f9994c1a781c5a7f029467c74d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:29 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8557d6638de90ac4f208d817743af41ab74aa0f9994c1a781c5a7f029467c74d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:29 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8557d6638de90ac4f208d817743af41ab74aa0f9994c1a781c5a7f029467c74d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:29 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8557d6638de90ac4f208d817743af41ab74aa0f9994c1a781c5a7f029467c74d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:29 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8557d6638de90ac4f208d817743af41ab74aa0f9994c1a781c5a7f029467c74d/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:29 np0005592158 podman[78662]: 2026-01-22 13:34:29.205160368 +0000 UTC m=+0.122696878 container init 66f760e623d530e7b921c508cf6f38e1f40372532fce45893a69ae1d6816f38b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 08:34:29 np0005592158 podman[78662]: 2026-01-22 13:34:29.110279357 +0000 UTC m=+0.027815867 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:34:29 np0005592158 podman[78662]: 2026-01-22 13:34:29.214971738 +0000 UTC m=+0.132508228 container start 66f760e623d530e7b921c508cf6f38e1f40372532fce45893a69ae1d6816f38b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate-test, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 22 08:34:29 np0005592158 podman[78662]: 2026-01-22 13:34:29.219171164 +0000 UTC m=+0.136707654 container attach 66f760e623d530e7b921c508cf6f38e1f40372532fce45893a69ae1d6816f38b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate-test, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:34:29 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate-test[78679]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Jan 22 08:34:29 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate-test[78679]:                            [--no-systemd] [--no-tmpfs]
Jan 22 08:34:29 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate-test[78679]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 22 08:34:29 np0005592158 systemd[1]: libpod-66f760e623d530e7b921c508cf6f38e1f40372532fce45893a69ae1d6816f38b.scope: Deactivated successfully.
Jan 22 08:34:29 np0005592158 podman[78662]: 2026-01-22 13:34:29.894010329 +0000 UTC m=+0.811546819 container died 66f760e623d530e7b921c508cf6f38e1f40372532fce45893a69ae1d6816f38b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate-test, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:34:29 np0005592158 systemd[1]: var-lib-containers-storage-overlay-8557d6638de90ac4f208d817743af41ab74aa0f9994c1a781c5a7f029467c74d-merged.mount: Deactivated successfully.
Jan 22 08:34:29 np0005592158 podman[78662]: 2026-01-22 13:34:29.947355808 +0000 UTC m=+0.864892298 container remove 66f760e623d530e7b921c508cf6f38e1f40372532fce45893a69ae1d6816f38b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate-test, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 22 08:34:29 np0005592158 systemd[1]: libpod-conmon-66f760e623d530e7b921c508cf6f38e1f40372532fce45893a69ae1d6816f38b.scope: Deactivated successfully.
Jan 22 08:34:30 np0005592158 systemd[1]: Reloading.
Jan 22 08:34:30 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:34:30 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:34:30 np0005592158 systemd[1]: Reloading.
Jan 22 08:34:30 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:34:30 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:34:30 np0005592158 systemd[1]: Starting Ceph osd.1 for 088fe176-0106-5401-803c-2da38b73b76a...
Jan 22 08:34:30 np0005592158 podman[78839]: 2026-01-22 13:34:30.97807814 +0000 UTC m=+0.040900356 container create 42330ada9b6596ab3aebd98d8240b2c4b111fd4f3877c0b4cebd16ae08cd9b03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 22 08:34:31 np0005592158 systemd[1]: Started libcrun container.
Jan 22 08:34:31 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bdc69c1063553f26b2e46a03bc70769920f3023299b32777aed5239ee8e46ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:31 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bdc69c1063553f26b2e46a03bc70769920f3023299b32777aed5239ee8e46ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:31 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bdc69c1063553f26b2e46a03bc70769920f3023299b32777aed5239ee8e46ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:31 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bdc69c1063553f26b2e46a03bc70769920f3023299b32777aed5239ee8e46ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:31 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bdc69c1063553f26b2e46a03bc70769920f3023299b32777aed5239ee8e46ef/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:31 np0005592158 podman[78839]: 2026-01-22 13:34:30.95952207 +0000 UTC m=+0.022344306 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:34:31 np0005592158 podman[78839]: 2026-01-22 13:34:31.236004219 +0000 UTC m=+0.298826465 container init 42330ada9b6596ab3aebd98d8240b2c4b111fd4f3877c0b4cebd16ae08cd9b03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:34:31 np0005592158 podman[78839]: 2026-01-22 13:34:31.243379853 +0000 UTC m=+0.306202069 container start 42330ada9b6596ab3aebd98d8240b2c4b111fd4f3877c0b4cebd16ae08cd9b03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 22 08:34:31 np0005592158 podman[78839]: 2026-01-22 13:34:31.247122195 +0000 UTC m=+0.309944411 container attach 42330ada9b6596ab3aebd98d8240b2c4b111fd4f3877c0b4cebd16ae08cd9b03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:34:32 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate[78855]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 22 08:34:32 np0005592158 bash[78839]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 22 08:34:32 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate[78855]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Jan 22 08:34:32 np0005592158 bash[78839]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Jan 22 08:34:32 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate[78855]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Jan 22 08:34:32 np0005592158 bash[78839]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Jan 22 08:34:32 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate[78855]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 22 08:34:32 np0005592158 bash[78839]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 22 08:34:32 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate[78855]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Jan 22 08:34:32 np0005592158 bash[78839]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Jan 22 08:34:32 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate[78855]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 22 08:34:32 np0005592158 bash[78839]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 22 08:34:32 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate[78855]: --> ceph-volume raw activate successful for osd ID: 1
Jan 22 08:34:32 np0005592158 bash[78839]: --> ceph-volume raw activate successful for osd ID: 1
Jan 22 08:34:32 np0005592158 systemd[1]: libpod-42330ada9b6596ab3aebd98d8240b2c4b111fd4f3877c0b4cebd16ae08cd9b03.scope: Deactivated successfully.
Jan 22 08:34:32 np0005592158 systemd[1]: libpod-42330ada9b6596ab3aebd98d8240b2c4b111fd4f3877c0b4cebd16ae08cd9b03.scope: Consumed 1.014s CPU time.
Jan 22 08:34:32 np0005592158 podman[78966]: 2026-01-22 13:34:32.306894628 +0000 UTC m=+0.037848763 container died 42330ada9b6596ab3aebd98d8240b2c4b111fd4f3877c0b4cebd16ae08cd9b03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 22 08:34:32 np0005592158 systemd[1]: var-lib-containers-storage-overlay-2bdc69c1063553f26b2e46a03bc70769920f3023299b32777aed5239ee8e46ef-merged.mount: Deactivated successfully.
Jan 22 08:34:32 np0005592158 podman[78966]: 2026-01-22 13:34:32.365600564 +0000 UTC m=+0.096554699 container remove 42330ada9b6596ab3aebd98d8240b2c4b111fd4f3877c0b4cebd16ae08cd9b03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:34:32 np0005592158 podman[79025]: 2026-01-22 13:34:32.554954956 +0000 UTC m=+0.027912260 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:34:32 np0005592158 podman[79025]: 2026-01-22 13:34:32.745056299 +0000 UTC m=+0.218013553 container create a71bbb89b63e61ca8483c9344777a8412cac7a4405a697d4e42f7d1ed608e69e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:34:32 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44cff825b2cfca00a0461212a632bf0ec4c43d328399463f38e09f996b63048/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:32 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44cff825b2cfca00a0461212a632bf0ec4c43d328399463f38e09f996b63048/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:32 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44cff825b2cfca00a0461212a632bf0ec4c43d328399463f38e09f996b63048/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:32 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44cff825b2cfca00a0461212a632bf0ec4c43d328399463f38e09f996b63048/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:32 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44cff825b2cfca00a0461212a632bf0ec4c43d328399463f38e09f996b63048/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:32 np0005592158 podman[79025]: 2026-01-22 13:34:32.814093619 +0000 UTC m=+0.287050883 container init a71bbb89b63e61ca8483c9344777a8412cac7a4405a697d4e42f7d1ed608e69e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 22 08:34:32 np0005592158 podman[79025]: 2026-01-22 13:34:32.823446556 +0000 UTC m=+0.296403810 container start a71bbb89b63e61ca8483c9344777a8412cac7a4405a697d4e42f7d1ed608e69e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 22 08:34:32 np0005592158 bash[79025]: a71bbb89b63e61ca8483c9344777a8412cac7a4405a697d4e42f7d1ed608e69e
Jan 22 08:34:32 np0005592158 systemd[1]: Started Ceph osd.1 for 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 08:34:32 np0005592158 ceph-osd[79044]: set uid:gid to 167:167 (ceph:ceph)
Jan 22 08:34:32 np0005592158 ceph-osd[79044]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Jan 22 08:34:32 np0005592158 ceph-osd[79044]: pidfile_write: ignore empty --pid-file
Jan 22 08:34:32 np0005592158 ceph-osd[79044]: bdev(0x55b6f076f800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 22 08:34:32 np0005592158 ceph-osd[79044]: bdev(0x55b6f076f800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 22 08:34:32 np0005592158 ceph-osd[79044]: bdev(0x55b6f076f800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 08:34:32 np0005592158 ceph-osd[79044]: bdev(0x55b6f076f800 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 08:34:32 np0005592158 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 22 08:34:32 np0005592158 ceph-osd[79044]: bdev(0x55b6f15a7800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 22 08:34:32 np0005592158 ceph-osd[79044]: bdev(0x55b6f15a7800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 22 08:34:32 np0005592158 ceph-osd[79044]: bdev(0x55b6f15a7800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 08:34:32 np0005592158 ceph-osd[79044]: bdev(0x55b6f15a7800 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 08:34:32 np0005592158 ceph-osd[79044]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Jan 22 08:34:32 np0005592158 ceph-osd[79044]: bdev(0x55b6f15a7800 /var/lib/ceph/osd/ceph-1/block) close
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: bdev(0x55b6f076f800 /var/lib/ceph/osd/ceph-1/block) close
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: load: jerasure load: lrc 
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: bdev(0x55b6f1628c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: bdev(0x55b6f1628c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: bdev(0x55b6f1628c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: bdev(0x55b6f1628c00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: bdev(0x55b6f1628c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 22 08:34:33 np0005592158 podman[79202]: 2026-01-22 13:34:33.472653127 +0000 UTC m=+0.027426106 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: bdev(0x55b6f1628c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: bdev(0x55b6f1628c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: bdev(0x55b6f1628c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: bdev(0x55b6f1628c00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: bdev(0x55b6f1628c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 22 08:34:33 np0005592158 podman[79202]: 2026-01-22 13:34:33.677371372 +0000 UTC m=+0.232144321 container create a4abdb5f0a5c6845c02cb21757c1564167c39fe9eefbe6a2e8539aa6264a0cb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_euler, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 22 08:34:33 np0005592158 systemd[1]: Started libpod-conmon-a4abdb5f0a5c6845c02cb21757c1564167c39fe9eefbe6a2e8539aa6264a0cb2.scope.
Jan 22 08:34:33 np0005592158 systemd[1]: Started libcrun container.
Jan 22 08:34:33 np0005592158 podman[79202]: 2026-01-22 13:34:33.790501956 +0000 UTC m=+0.345274905 container init a4abdb5f0a5c6845c02cb21757c1564167c39fe9eefbe6a2e8539aa6264a0cb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_euler, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 22 08:34:33 np0005592158 podman[79202]: 2026-01-22 13:34:33.798836205 +0000 UTC m=+0.353609154 container start a4abdb5f0a5c6845c02cb21757c1564167c39fe9eefbe6a2e8539aa6264a0cb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 22 08:34:33 np0005592158 podman[79202]: 2026-01-22 13:34:33.802116605 +0000 UTC m=+0.356889594 container attach a4abdb5f0a5c6845c02cb21757c1564167c39fe9eefbe6a2e8539aa6264a0cb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:34:33 np0005592158 practical_euler[79222]: 167 167
Jan 22 08:34:33 np0005592158 systemd[1]: libpod-a4abdb5f0a5c6845c02cb21757c1564167c39fe9eefbe6a2e8539aa6264a0cb2.scope: Deactivated successfully.
Jan 22 08:34:33 np0005592158 podman[79202]: 2026-01-22 13:34:33.806977359 +0000 UTC m=+0.361750318 container died a4abdb5f0a5c6845c02cb21757c1564167c39fe9eefbe6a2e8539aa6264a0cb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_euler, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:34:33 np0005592158 systemd[1]: var-lib-containers-storage-overlay-e8742766199166f07ebecfc350152b22c5fdd5fcb68f14f9b0b79882f33183a7-merged.mount: Deactivated successfully.
Jan 22 08:34:33 np0005592158 podman[79202]: 2026-01-22 13:34:33.839977278 +0000 UTC m=+0.394750227 container remove a4abdb5f0a5c6845c02cb21757c1564167c39fe9eefbe6a2e8539aa6264a0cb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_euler, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:34:33 np0005592158 systemd[1]: libpod-conmon-a4abdb5f0a5c6845c02cb21757c1564167c39fe9eefbe6a2e8539aa6264a0cb2.scope: Deactivated successfully.
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: bdev(0x55b6f1628c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: bdev(0x55b6f1628c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: bdev(0x55b6f1628c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: bdev(0x55b6f1628c00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: bdev(0x55b6f1629400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: bdev(0x55b6f1629400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: bdev(0x55b6f1629400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: bdev(0x55b6f1629400 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: bluefs mount
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: bluefs mount shared_bdev_used = 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: RocksDB version: 7.9.2
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Git sha 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: DB SUMMARY
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: DB Session ID:  00CFL0TF3NW7HAFMQXJB
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: CURRENT file:  CURRENT
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: IDENTITY file:  IDENTITY
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                         Options.error_if_exists: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                       Options.create_if_missing: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                         Options.paranoid_checks: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                                     Options.env: 0x55b6f15f9c70
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                                Options.info_log: 0x55b6f07ecba0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.max_file_opening_threads: 16
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                              Options.statistics: (nil)
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                               Options.use_fsync: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                       Options.max_log_file_size: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                         Options.allow_fallocate: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                        Options.use_direct_reads: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.create_missing_column_families: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                              Options.db_log_dir: 
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                                 Options.wal_dir: db.wal
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                   Options.advise_random_on_open: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                    Options.write_buffer_manager: 0x55b6f1702460
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                            Options.rate_limiter: (nil)
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.unordered_write: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                               Options.row_cache: None
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                              Options.wal_filter: None
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:             Options.allow_ingest_behind: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:             Options.two_write_queues: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:             Options.manual_wal_flush: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:             Options.wal_compression: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:             Options.atomic_flush: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                 Options.log_readahead_size: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:             Options.allow_data_in_errors: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:             Options.db_host_id: __hostname__
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:             Options.max_background_jobs: 4
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:             Options.max_background_compactions: -1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:             Options.max_subcompactions: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                          Options.max_open_files: -1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                          Options.bytes_per_sync: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.max_background_flushes: -1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Compression algorithms supported:
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: 	kZSTD supported: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: 	kXpressCompression supported: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: 	kBZip2Compression supported: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: 	kLZ4Compression supported: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: 	kZlibCompression supported: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: 	kSnappyCompression supported: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:        Options.compaction_filter: None
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6f07ec600)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b6f07e2dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.compression: LZ4
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:             Options.num_levels: 7
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:           Options.merge_operator: None
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:        Options.compaction_filter: None
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6f07ec600)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b6f07e2dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.compression: LZ4
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:             Options.num_levels: 7
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:           Options.merge_operator: None
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:        Options.compaction_filter: None
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6f07ec600)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b6f07e2dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.compression: LZ4
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:             Options.num_levels: 7
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:           Options.merge_operator: None
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:        Options.compaction_filter: None
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6f07ec600)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b6f07e2dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.compression: LZ4
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:             Options.num_levels: 7
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:           Options.merge_operator: None
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:        Options.compaction_filter: None
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6f07ec600)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b6f07e2dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.compression: LZ4
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:             Options.num_levels: 7
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:           Options.merge_operator: None
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:        Options.compaction_filter: None
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6f07ec600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b6f07e2dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.compression: LZ4
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:             Options.num_levels: 7
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:           Options.merge_operator: None
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:        Options.compaction_filter: None
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6f07ec600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b6f07e2dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.compression: LZ4
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:             Options.num_levels: 7
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:           Options.merge_operator: None
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:        Options.compaction_filter: None
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6f07ec5c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b6f07e2430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.compression: LZ4
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:             Options.num_levels: 7
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:34:33 np0005592158 ceph-osd[79044]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:           Options.merge_operator: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.compaction_filter: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6f07ec5c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b6f07e2430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.compression: LZ4
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.num_levels: 7
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:           Options.merge_operator: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.compaction_filter: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6f07ec5c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b6f07e2430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.compression: LZ4
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.num_levels: 7
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b67f644f-16fd-42e9-98f5-fc9e121c20ca
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088873976203, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088873976456, "job": 1, "event": "recovery_finished"}
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: freelist init
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: freelist _read_cfg
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: bluefs umount
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: bdev(0x55b6f1629400 /var/lib/ceph/osd/ceph-1/block) close
Jan 22 08:34:34 np0005592158 podman[79246]: 2026-01-22 13:34:34.05840422 +0000 UTC m=+0.106449791 container create 6edb07908c3277bfd2d8f64a44f05db8af60d31dc916f597e4b9b8c3da5cb0db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_shockley, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 22 08:34:34 np0005592158 podman[79246]: 2026-01-22 13:34:33.97993137 +0000 UTC m=+0.027976931 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:34:34 np0005592158 systemd[1]: Started libpod-conmon-6edb07908c3277bfd2d8f64a44f05db8af60d31dc916f597e4b9b8c3da5cb0db.scope.
Jan 22 08:34:34 np0005592158 systemd[1]: Started libcrun container.
Jan 22 08:34:34 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/209b4a7f8fbf554d6dbdccbfaacd8ede958c68c19d6978a9a0de7d0797cfa04c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:34 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/209b4a7f8fbf554d6dbdccbfaacd8ede958c68c19d6978a9a0de7d0797cfa04c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:34 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/209b4a7f8fbf554d6dbdccbfaacd8ede958c68c19d6978a9a0de7d0797cfa04c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:34 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/209b4a7f8fbf554d6dbdccbfaacd8ede958c68c19d6978a9a0de7d0797cfa04c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:34 np0005592158 podman[79246]: 2026-01-22 13:34:34.188904892 +0000 UTC m=+0.236950473 container init 6edb07908c3277bfd2d8f64a44f05db8af60d31dc916f597e4b9b8c3da5cb0db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_shockley, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:34:34 np0005592158 podman[79246]: 2026-01-22 13:34:34.197435837 +0000 UTC m=+0.245481388 container start 6edb07908c3277bfd2d8f64a44f05db8af60d31dc916f597e4b9b8c3da5cb0db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_shockley, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 08:34:34 np0005592158 podman[79246]: 2026-01-22 13:34:34.201238532 +0000 UTC m=+0.249284083 container attach 6edb07908c3277bfd2d8f64a44f05db8af60d31dc916f597e4b9b8c3da5cb0db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: bdev(0x55b6f1629400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: bdev(0x55b6f1629400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: bdev(0x55b6f1629400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: bdev(0x55b6f1629400 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: bluefs mount
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: bluefs mount shared_bdev_used = 4718592
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: RocksDB version: 7.9.2
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Git sha 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: DB SUMMARY
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: DB Session ID:  00CFL0TF3NW7HAFMQXJA
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: CURRENT file:  CURRENT
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: IDENTITY file:  IDENTITY
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                         Options.error_if_exists: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                       Options.create_if_missing: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                         Options.paranoid_checks: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                                     Options.env: 0x55b6f082e380
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                                Options.info_log: 0x55b6f07ed460
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.max_file_opening_threads: 16
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                              Options.statistics: (nil)
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                               Options.use_fsync: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                       Options.max_log_file_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                         Options.allow_fallocate: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                        Options.use_direct_reads: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.create_missing_column_families: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                              Options.db_log_dir: 
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                                 Options.wal_dir: db.wal
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.advise_random_on_open: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                    Options.write_buffer_manager: 0x55b6f1702460
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                            Options.rate_limiter: (nil)
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.unordered_write: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                               Options.row_cache: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                              Options.wal_filter: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.allow_ingest_behind: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.two_write_queues: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.manual_wal_flush: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.wal_compression: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.atomic_flush: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                 Options.log_readahead_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.allow_data_in_errors: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.db_host_id: __hostname__
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.max_background_jobs: 4
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.max_background_compactions: -1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.max_subcompactions: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                          Options.max_open_files: -1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                          Options.bytes_per_sync: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.max_background_flushes: -1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Compression algorithms supported:
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: 	kZSTD supported: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: 	kXpressCompression supported: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: 	kBZip2Compression supported: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: 	kLZ4Compression supported: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: 	kZlibCompression supported: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: 	kSnappyCompression supported: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.compaction_filter: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6f07f68a0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b6f07e3610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.compression: LZ4
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.num_levels: 7
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:           Options.merge_operator: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.compaction_filter: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6f07f68a0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b6f07e3610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.compression: LZ4
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.num_levels: 7
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:           Options.merge_operator: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.compaction_filter: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6f07f68a0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b6f07e3610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.compression: LZ4
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.num_levels: 7
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:           Options.merge_operator: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.compaction_filter: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6f07f68a0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b6f07e3610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.compression: LZ4
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.num_levels: 7
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:           Options.merge_operator: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.compaction_filter: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6f07f68a0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b6f07e3610#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.compression: LZ4
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.num_levels: 7
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:           Options.merge_operator: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.compaction_filter: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6f07f68a0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b6f07e3610#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.compression: LZ4
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.num_levels: 7
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:           Options.merge_operator: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.compaction_filter: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6f07f68a0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b6f07e3610#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.compression: LZ4
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.num_levels: 7
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:           Options.merge_operator: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.compaction_filter: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6f07ec320)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b6f07e3770
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.compression: LZ4
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.num_levels: 7
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:           Options.merge_operator: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.compaction_filter: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6f07ec320)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b6f07e3770
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.compression: LZ4
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.num_levels: 7
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:           Options.merge_operator: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.compaction_filter: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6f07ec320)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b6f07e3770
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.compression: LZ4
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.num_levels: 7
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b67f644f-16fd-42e9-98f5-fc9e121c20ca
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088874253111, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088874258050, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088874, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b67f644f-16fd-42e9-98f5-fc9e121c20ca", "db_session_id": "00CFL0TF3NW7HAFMQXJA", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088874261802, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088874, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b67f644f-16fd-42e9-98f5-fc9e121c20ca", "db_session_id": "00CFL0TF3NW7HAFMQXJA", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088874265174, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088874, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b67f644f-16fd-42e9-98f5-fc9e121c20ca", "db_session_id": "00CFL0TF3NW7HAFMQXJA", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088874266609, "job": 1, "event": "recovery_finished"}
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55b6f17c9c00
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: DB pointer 0x55b6f16eba00
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.5 total, 0.5 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.5 total, 0.5 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.5 total, 0.5 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 
collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.5 total, 0.5 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 
0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 460.80 MB usag
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: _get_class not permitted to load lua
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: _get_class not permitted to load sdk
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: _get_class not permitted to load test_remote_reads
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: osd.1 0 load_pgs
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: osd.1 0 load_pgs opened 0 pgs
Jan 22 08:34:34 np0005592158 ceph-osd[79044]: osd.1 0 log_to_monitors true
Jan 22 08:34:34 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1[79040]: 2026-01-22T13:34:34.725+0000 7f53dd1db740 -1 osd.1 0 log_to_monitors true
Jan 22 08:34:35 np0005592158 musing_shockley[79456]: {
Jan 22 08:34:35 np0005592158 musing_shockley[79456]:    "729e7fcc-4be0-4e65-a251-dac5739e2fea": {
Jan 22 08:34:35 np0005592158 musing_shockley[79456]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:34:35 np0005592158 musing_shockley[79456]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 08:34:35 np0005592158 musing_shockley[79456]:        "osd_id": 1,
Jan 22 08:34:35 np0005592158 musing_shockley[79456]:        "osd_uuid": "729e7fcc-4be0-4e65-a251-dac5739e2fea",
Jan 22 08:34:35 np0005592158 musing_shockley[79456]:        "type": "bluestore"
Jan 22 08:34:35 np0005592158 musing_shockley[79456]:    }
Jan 22 08:34:35 np0005592158 musing_shockley[79456]: }
Jan 22 08:34:35 np0005592158 systemd[1]: libpod-6edb07908c3277bfd2d8f64a44f05db8af60d31dc916f597e4b9b8c3da5cb0db.scope: Deactivated successfully.
Jan 22 08:34:35 np0005592158 podman[79246]: 2026-01-22 13:34:35.169991008 +0000 UTC m=+1.218036569 container died 6edb07908c3277bfd2d8f64a44f05db8af60d31dc916f597e4b9b8c3da5cb0db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 22 08:34:35 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 22 08:34:35 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 22 08:34:35 np0005592158 systemd[1]: var-lib-containers-storage-overlay-209b4a7f8fbf554d6dbdccbfaacd8ede958c68c19d6978a9a0de7d0797cfa04c-merged.mount: Deactivated successfully.
Jan 22 08:34:35 np0005592158 podman[79246]: 2026-01-22 13:34:35.910171993 +0000 UTC m=+1.958217534 container remove 6edb07908c3277bfd2d8f64a44f05db8af60d31dc916f597e4b9b8c3da5cb0db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_shockley, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:34:35 np0005592158 systemd[1]: libpod-conmon-6edb07908c3277bfd2d8f64a44f05db8af60d31dc916f597e4b9b8c3da5cb0db.scope: Deactivated successfully.
Jan 22 08:34:36 np0005592158 ceph-osd[79044]: osd.1 0 done with init, starting boot process
Jan 22 08:34:36 np0005592158 ceph-osd[79044]: osd.1 0 start_boot
Jan 22 08:34:36 np0005592158 ceph-osd[79044]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 22 08:34:36 np0005592158 ceph-osd[79044]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 22 08:34:36 np0005592158 ceph-osd[79044]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 22 08:34:36 np0005592158 ceph-osd[79044]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 22 08:34:36 np0005592158 ceph-osd[79044]: osd.1 0  bench count 12288000 bsize 4 KiB
Jan 22 08:34:37 np0005592158 podman[79929]: 2026-01-22 13:34:37.394670046 +0000 UTC m=+0.192376247 container exec 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:34:37 np0005592158 podman[79929]: 2026-01-22 13:34:37.625977062 +0000 UTC m=+0.423683233 container exec_died 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:34:39 np0005592158 podman[80250]: 2026-01-22 13:34:39.369838484 +0000 UTC m=+0.066429079 container create 1e0d892b2aea5aeae840461aa4d7499cf2513cdd0bf7059e8be11a95ee23fc6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_villani, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 22 08:34:39 np0005592158 podman[80250]: 2026-01-22 13:34:39.325525785 +0000 UTC m=+0.022116400 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:34:39 np0005592158 systemd[1]: Started libpod-conmon-1e0d892b2aea5aeae840461aa4d7499cf2513cdd0bf7059e8be11a95ee23fc6d.scope.
Jan 22 08:34:39 np0005592158 systemd[1]: Started libcrun container.
Jan 22 08:34:39 np0005592158 podman[80250]: 2026-01-22 13:34:39.518182648 +0000 UTC m=+0.214773263 container init 1e0d892b2aea5aeae840461aa4d7499cf2513cdd0bf7059e8be11a95ee23fc6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_villani, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:34:39 np0005592158 podman[80250]: 2026-01-22 13:34:39.525097968 +0000 UTC m=+0.221688563 container start 1e0d892b2aea5aeae840461aa4d7499cf2513cdd0bf7059e8be11a95ee23fc6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_villani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 22 08:34:39 np0005592158 gracious_villani[80266]: 167 167
Jan 22 08:34:39 np0005592158 systemd[1]: libpod-1e0d892b2aea5aeae840461aa4d7499cf2513cdd0bf7059e8be11a95ee23fc6d.scope: Deactivated successfully.
Jan 22 08:34:39 np0005592158 podman[80250]: 2026-01-22 13:34:39.543502935 +0000 UTC m=+0.240093530 container attach 1e0d892b2aea5aeae840461aa4d7499cf2513cdd0bf7059e8be11a95ee23fc6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_villani, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:34:39 np0005592158 podman[80250]: 2026-01-22 13:34:39.543970048 +0000 UTC m=+0.240560643 container died 1e0d892b2aea5aeae840461aa4d7499cf2513cdd0bf7059e8be11a95ee23fc6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_villani, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 22 08:34:39 np0005592158 systemd[1]: var-lib-containers-storage-overlay-8a47091d5d926327a44da33748ab42eaa3fa692d1769ca52a050ce5abe73d215-merged.mount: Deactivated successfully.
Jan 22 08:34:39 np0005592158 podman[80250]: 2026-01-22 13:34:39.676731452 +0000 UTC m=+0.373322087 container remove 1e0d892b2aea5aeae840461aa4d7499cf2513cdd0bf7059e8be11a95ee23fc6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:34:39 np0005592158 systemd[1]: libpod-conmon-1e0d892b2aea5aeae840461aa4d7499cf2513cdd0bf7059e8be11a95ee23fc6d.scope: Deactivated successfully.
Jan 22 08:34:39 np0005592158 podman[80288]: 2026-01-22 13:34:39.829419215 +0000 UTC m=+0.049670329 container create e1355aef6cb2723e69820926bf19672938daba45ce48d5c8140d40158d3fbbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:34:39 np0005592158 systemd[1]: Started libpod-conmon-e1355aef6cb2723e69820926bf19672938daba45ce48d5c8140d40158d3fbbbc.scope.
Jan 22 08:34:39 np0005592158 podman[80288]: 2026-01-22 13:34:39.801735593 +0000 UTC m=+0.021986717 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:34:39 np0005592158 systemd[1]: Started libcrun container.
Jan 22 08:34:39 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fd3ffcbaf0675f031649a4421495254aa4c27818b3b28b2ece3d279b69fce3b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:39 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fd3ffcbaf0675f031649a4421495254aa4c27818b3b28b2ece3d279b69fce3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:39 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fd3ffcbaf0675f031649a4421495254aa4c27818b3b28b2ece3d279b69fce3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:39 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fd3ffcbaf0675f031649a4421495254aa4c27818b3b28b2ece3d279b69fce3b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:34:39 np0005592158 podman[80288]: 2026-01-22 13:34:39.954781396 +0000 UTC m=+0.175032530 container init e1355aef6cb2723e69820926bf19672938daba45ce48d5c8140d40158d3fbbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:34:39 np0005592158 podman[80288]: 2026-01-22 13:34:39.964612296 +0000 UTC m=+0.184863410 container start e1355aef6cb2723e69820926bf19672938daba45ce48d5c8140d40158d3fbbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_robinson, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 22 08:34:39 np0005592158 podman[80288]: 2026-01-22 13:34:39.980920725 +0000 UTC m=+0.201171869 container attach e1355aef6cb2723e69820926bf19672938daba45ce48d5c8140d40158d3fbbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 22 08:34:41 np0005592158 stoic_robinson[80305]: [
Jan 22 08:34:41 np0005592158 stoic_robinson[80305]:    {
Jan 22 08:34:41 np0005592158 stoic_robinson[80305]:        "available": false,
Jan 22 08:34:41 np0005592158 stoic_robinson[80305]:        "ceph_device": false,
Jan 22 08:34:41 np0005592158 stoic_robinson[80305]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 22 08:34:41 np0005592158 stoic_robinson[80305]:        "lsm_data": {},
Jan 22 08:34:41 np0005592158 stoic_robinson[80305]:        "lvs": [],
Jan 22 08:34:41 np0005592158 stoic_robinson[80305]:        "path": "/dev/sr0",
Jan 22 08:34:41 np0005592158 stoic_robinson[80305]:        "rejected_reasons": [
Jan 22 08:34:41 np0005592158 stoic_robinson[80305]:            "Has a FileSystem",
Jan 22 08:34:41 np0005592158 stoic_robinson[80305]:            "Insufficient space (<5GB)"
Jan 22 08:34:41 np0005592158 stoic_robinson[80305]:        ],
Jan 22 08:34:41 np0005592158 stoic_robinson[80305]:        "sys_api": {
Jan 22 08:34:41 np0005592158 stoic_robinson[80305]:            "actuators": null,
Jan 22 08:34:41 np0005592158 stoic_robinson[80305]:            "device_nodes": "sr0",
Jan 22 08:34:41 np0005592158 stoic_robinson[80305]:            "devname": "sr0",
Jan 22 08:34:41 np0005592158 stoic_robinson[80305]:            "human_readable_size": "482.00 KB",
Jan 22 08:34:41 np0005592158 stoic_robinson[80305]:            "id_bus": "ata",
Jan 22 08:34:41 np0005592158 stoic_robinson[80305]:            "model": "QEMU DVD-ROM",
Jan 22 08:34:41 np0005592158 stoic_robinson[80305]:            "nr_requests": "2",
Jan 22 08:34:41 np0005592158 stoic_robinson[80305]:            "parent": "/dev/sr0",
Jan 22 08:34:41 np0005592158 stoic_robinson[80305]:            "partitions": {},
Jan 22 08:34:41 np0005592158 stoic_robinson[80305]:            "path": "/dev/sr0",
Jan 22 08:34:41 np0005592158 stoic_robinson[80305]:            "removable": "1",
Jan 22 08:34:41 np0005592158 stoic_robinson[80305]:            "rev": "2.5+",
Jan 22 08:34:41 np0005592158 stoic_robinson[80305]:            "ro": "0",
Jan 22 08:34:41 np0005592158 stoic_robinson[80305]:            "rotational": "1",
Jan 22 08:34:41 np0005592158 stoic_robinson[80305]:            "sas_address": "",
Jan 22 08:34:41 np0005592158 stoic_robinson[80305]:            "sas_device_handle": "",
Jan 22 08:34:41 np0005592158 stoic_robinson[80305]:            "scheduler_mode": "mq-deadline",
Jan 22 08:34:41 np0005592158 stoic_robinson[80305]:            "sectors": 0,
Jan 22 08:34:41 np0005592158 stoic_robinson[80305]:            "sectorsize": "2048",
Jan 22 08:34:41 np0005592158 stoic_robinson[80305]:            "size": 493568.0,
Jan 22 08:34:41 np0005592158 stoic_robinson[80305]:            "support_discard": "2048",
Jan 22 08:34:41 np0005592158 stoic_robinson[80305]:            "type": "disk",
Jan 22 08:34:41 np0005592158 stoic_robinson[80305]:            "vendor": "QEMU"
Jan 22 08:34:41 np0005592158 stoic_robinson[80305]:        }
Jan 22 08:34:41 np0005592158 stoic_robinson[80305]:    }
Jan 22 08:34:41 np0005592158 stoic_robinson[80305]: ]
Jan 22 08:34:41 np0005592158 systemd[1]: libpod-e1355aef6cb2723e69820926bf19672938daba45ce48d5c8140d40158d3fbbbc.scope: Deactivated successfully.
Jan 22 08:34:41 np0005592158 podman[80288]: 2026-01-22 13:34:41.141422099 +0000 UTC m=+1.361673223 container died e1355aef6cb2723e69820926bf19672938daba45ce48d5c8140d40158d3fbbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_robinson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:34:41 np0005592158 systemd[1]: libpod-e1355aef6cb2723e69820926bf19672938daba45ce48d5c8140d40158d3fbbbc.scope: Consumed 1.170s CPU time.
Jan 22 08:34:41 np0005592158 systemd[1]: var-lib-containers-storage-overlay-6fd3ffcbaf0675f031649a4421495254aa4c27818b3b28b2ece3d279b69fce3b-merged.mount: Deactivated successfully.
Jan 22 08:34:41 np0005592158 podman[80288]: 2026-01-22 13:34:41.24387599 +0000 UTC m=+1.464127104 container remove e1355aef6cb2723e69820926bf19672938daba45ce48d5c8140d40158d3fbbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:34:41 np0005592158 systemd[1]: libpod-conmon-e1355aef6cb2723e69820926bf19672938daba45ce48d5c8140d40158d3fbbbc.scope: Deactivated successfully.
Jan 22 08:34:42 np0005592158 ceph-osd[79044]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 13.546 iops: 3467.856 elapsed_sec: 0.865
Jan 22 08:34:42 np0005592158 ceph-osd[79044]: log_channel(cluster) log [WRN] : OSD bench result of 3467.855722 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 22 08:34:42 np0005592158 ceph-osd[79044]: osd.1 0 waiting for initial osdmap
Jan 22 08:34:42 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1[79040]: 2026-01-22T13:34:42.093+0000 7f53d915b640 -1 osd.1 0 waiting for initial osdmap
Jan 22 08:34:42 np0005592158 ceph-osd[79044]: osd.1 12 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 22 08:34:42 np0005592158 ceph-osd[79044]: osd.1 12 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Jan 22 08:34:42 np0005592158 ceph-osd[79044]: osd.1 12 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 22 08:34:42 np0005592158 ceph-osd[79044]: osd.1 12 check_osdmap_features require_osd_release unknown -> reef
Jan 22 08:34:42 np0005592158 ceph-osd[79044]: osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 22 08:34:42 np0005592158 ceph-osd[79044]: osd.1 12 set_numa_affinity not setting numa affinity
Jan 22 08:34:42 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-1[79040]: 2026-01-22T13:34:42.128+0000 7f53d4783640 -1 osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 22 08:34:42 np0005592158 ceph-osd[79044]: osd.1 12 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Jan 22 08:34:43 np0005592158 ceph-osd[79044]: osd.1 13 state: booting -> active
Jan 22 08:34:43 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[11,13)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:44 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 14 pg[2.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:44 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=13/14 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[11,13)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 15 pg[2.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 20 pg[2.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=20 pruub=9.890140533s) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active pruub 26.615266800s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:34:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=20 pruub=9.890140533s) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown pruub 26.615266800s@ mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.2( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.1a( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.19( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.7( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.8( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.5( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.6( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.f( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.10( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.11( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.1( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.12( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.b( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.c( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.d( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.e( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.15( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.16( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.3( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.13( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.14( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.17( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.18( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.1d( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.1e( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.1b( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.1c( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.9( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.a( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.1f( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 21 pg[2.4( empty local-lis/les=14/15 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:53 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.1d( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:53 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.1e( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:53 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.1c( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:53 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.a( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:53 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.b( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:53 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.9( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:53 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.7( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:53 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.6( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:53 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.2( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:53 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.4( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:53 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.5( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:53 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.1( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:53 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.3( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:53 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.0( empty local-lis/les=20/22 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:53 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.d( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:53 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.8( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:53 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.e( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:53 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.c( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:53 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.f( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:53 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.12( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:53 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.11( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:53 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.13( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:53 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.1f( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:53 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.15( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:53 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.16( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:53 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.14( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:53 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.18( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:53 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.10( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:53 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.1b( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:53 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.1a( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:53 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.19( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:53 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 22 pg[2.17( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=14/14 les/c/f=15/15/0 sis=20) [1] r=0 lpr=20 pi=[14,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:55 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 24 pg[7.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [1] r=0 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:34:56 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 25 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [1] r=0 lpr=24 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:34:56 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.1 deep-scrub starts
Jan 22 08:34:56 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.1 deep-scrub ok
Jan 22 08:34:59 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.2 deep-scrub starts
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.2 deep-scrub ok
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[3.1a( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[3.16( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[3.14( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[3.15( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[3.13( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[3.10( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[3.11( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[3.e( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[3.f( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[3.c( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[3.d( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[3.3( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[3.5( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[3.9( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[3.a( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[3.1d( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[3.1c( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.1e( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.483595848s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 34.381401062s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.1f( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.485149384s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 34.383068085s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.1e( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.483470917s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 34.381401062s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.1f( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.485075951s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 34.383068085s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.9( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484004974s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 34.382072449s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.6( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484086037s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 34.382186890s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.6( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484064102s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 34.382186890s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.9( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.483953476s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 34.382072449s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.4( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484031677s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 34.382320404s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.a( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.483535767s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 34.381839752s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.4( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484002113s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 34.382320404s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.a( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.483474731s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 34.381839752s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.1( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484015465s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 34.382427216s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.c( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484289169s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 34.382717133s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.d( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484019279s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 34.382514954s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.1( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.483936310s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 34.382427216s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.d( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.483990669s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 34.382514954s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.e( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484164238s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 34.382698059s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.c( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484164238s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 34.382717133s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.e( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484102249s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 34.382698059s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.10( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484872818s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 34.383556366s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.13( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484283447s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 34.382980347s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.13( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484254837s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 34.382980347s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.15( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484431267s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 34.383197784s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.10( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484824181s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 34.383556366s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.19( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484666824s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 34.383541107s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.15( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484370232s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 34.383197784s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.19( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484643936s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 34.383541107s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.1b( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484479904s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active pruub 34.383563995s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:35:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 28 pg[2.1b( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=28 pruub=8.484447479s) [0] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 34.383563995s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:35:01 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 29 pg[3.14( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:01 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 29 pg[3.1a( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:01 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 29 pg[3.15( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:01 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 29 pg[3.13( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:01 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 29 pg[3.16( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:01 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 29 pg[3.e( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:01 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 29 pg[3.11( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:01 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 29 pg[3.10( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:01 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 29 pg[3.d( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:01 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 29 pg[3.f( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:01 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 29 pg[3.3( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:01 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 29 pg[3.9( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:01 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 29 pg[3.5( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:01 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 29 pg[3.a( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:01 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 29 pg[3.1c( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:01 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 29 pg[3.1d( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:01 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 29 pg[3.c( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=28) [1] r=0 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:35:03 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.3 deep-scrub starts
Jan 22 08:35:03 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.3 deep-scrub ok
Jan 22 08:35:04 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Jan 22 08:35:04 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Jan 22 08:35:06 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Jan 22 08:35:06 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Jan 22 08:35:08 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Jan 22 08:35:08 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Jan 22 08:35:10 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.b scrub starts
Jan 22 08:35:10 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.b scrub ok
Jan 22 08:35:12 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.f scrub starts
Jan 22 08:35:12 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.f scrub ok
Jan 22 08:35:14 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Jan 22 08:35:14 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Jan 22 08:35:15 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Jan 22 08:35:15 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Jan 22 08:35:16 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Jan 22 08:35:16 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Jan 22 08:35:19 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.16 deep-scrub starts
Jan 22 08:35:19 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.16 deep-scrub ok
Jan 22 08:35:20 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.17 deep-scrub starts
Jan 22 08:35:20 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.17 deep-scrub ok
Jan 22 08:35:22 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Jan 22 08:35:22 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Jan 22 08:35:23 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Jan 22 08:35:23 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Jan 22 08:35:27 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Jan 22 08:35:27 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Jan 22 08:35:29 np0005592158 podman[81479]: 2026-01-22 13:35:29.619604071 +0000 UTC m=+0.047952281 container create 4eb66358d23373ed7c0adcf6363b70729755da9b6f6d41bfcbafc2631356c64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 08:35:29 np0005592158 systemd[72521]: Starting Mark boot as successful...
Jan 22 08:35:29 np0005592158 systemd[72521]: Finished Mark boot as successful.
Jan 22 08:35:29 np0005592158 systemd[1]: Started libpod-conmon-4eb66358d23373ed7c0adcf6363b70729755da9b6f6d41bfcbafc2631356c64b.scope.
Jan 22 08:35:29 np0005592158 systemd[1]: Started libcrun container.
Jan 22 08:35:29 np0005592158 podman[81479]: 2026-01-22 13:35:29.597347778 +0000 UTC m=+0.025696018 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:35:29 np0005592158 podman[81479]: 2026-01-22 13:35:29.692749625 +0000 UTC m=+0.121097825 container init 4eb66358d23373ed7c0adcf6363b70729755da9b6f6d41bfcbafc2631356c64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goldwasser, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 08:35:29 np0005592158 podman[81479]: 2026-01-22 13:35:29.701606126 +0000 UTC m=+0.129954346 container start 4eb66358d23373ed7c0adcf6363b70729755da9b6f6d41bfcbafc2631356c64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goldwasser, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 22 08:35:29 np0005592158 podman[81479]: 2026-01-22 13:35:29.705373758 +0000 UTC m=+0.133721988 container attach 4eb66358d23373ed7c0adcf6363b70729755da9b6f6d41bfcbafc2631356c64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goldwasser, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:35:29 np0005592158 clever_goldwasser[81497]: 167 167
Jan 22 08:35:29 np0005592158 systemd[1]: libpod-4eb66358d23373ed7c0adcf6363b70729755da9b6f6d41bfcbafc2631356c64b.scope: Deactivated successfully.
Jan 22 08:35:29 np0005592158 podman[81479]: 2026-01-22 13:35:29.708329298 +0000 UTC m=+0.136677518 container died 4eb66358d23373ed7c0adcf6363b70729755da9b6f6d41bfcbafc2631356c64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goldwasser, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 22 08:35:29 np0005592158 systemd[1]: var-lib-containers-storage-overlay-780022ee69a15f64b9042e1cc2d8cdaaa6aa70e48d7686e4c571aa7dfed57839-merged.mount: Deactivated successfully.
Jan 22 08:35:29 np0005592158 podman[81479]: 2026-01-22 13:35:29.747512901 +0000 UTC m=+0.175861111 container remove 4eb66358d23373ed7c0adcf6363b70729755da9b6f6d41bfcbafc2631356c64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_goldwasser, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:35:29 np0005592158 systemd[1]: libpod-conmon-4eb66358d23373ed7c0adcf6363b70729755da9b6f6d41bfcbafc2631356c64b.scope: Deactivated successfully.
Jan 22 08:35:29 np0005592158 podman[81515]: 2026-01-22 13:35:29.827678046 +0000 UTC m=+0.044685453 container create a7f4fdd36ea3cf1d3f8ddfad3cc3f5cedce46482b5d182014c41e8b4e7771aed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 08:35:29 np0005592158 systemd[1]: Started libpod-conmon-a7f4fdd36ea3cf1d3f8ddfad3cc3f5cedce46482b5d182014c41e8b4e7771aed.scope.
Jan 22 08:35:29 np0005592158 systemd[1]: Started libcrun container.
Jan 22 08:35:29 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b19e4b60025e3bedf32e7dec3de7fba2109a9b66f3a0fc49378f66c05baf7e3/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:29 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b19e4b60025e3bedf32e7dec3de7fba2109a9b66f3a0fc49378f66c05baf7e3/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:29 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b19e4b60025e3bedf32e7dec3de7fba2109a9b66f3a0fc49378f66c05baf7e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:29 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b19e4b60025e3bedf32e7dec3de7fba2109a9b66f3a0fc49378f66c05baf7e3/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:29 np0005592158 podman[81515]: 2026-01-22 13:35:29.806937043 +0000 UTC m=+0.023944530 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:35:29 np0005592158 podman[81515]: 2026-01-22 13:35:29.909794703 +0000 UTC m=+0.126802120 container init a7f4fdd36ea3cf1d3f8ddfad3cc3f5cedce46482b5d182014c41e8b4e7771aed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:35:29 np0005592158 podman[81515]: 2026-01-22 13:35:29.916286989 +0000 UTC m=+0.133294396 container start a7f4fdd36ea3cf1d3f8ddfad3cc3f5cedce46482b5d182014c41e8b4e7771aed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kepler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:35:29 np0005592158 podman[81515]: 2026-01-22 13:35:29.922116787 +0000 UTC m=+0.139124214 container attach a7f4fdd36ea3cf1d3f8ddfad3cc3f5cedce46482b5d182014c41e8b4e7771aed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kepler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:35:30 np0005592158 systemd[1]: libpod-a7f4fdd36ea3cf1d3f8ddfad3cc3f5cedce46482b5d182014c41e8b4e7771aed.scope: Deactivated successfully.
Jan 22 08:35:30 np0005592158 podman[81515]: 2026-01-22 13:35:30.007465123 +0000 UTC m=+0.224472560 container died a7f4fdd36ea3cf1d3f8ddfad3cc3f5cedce46482b5d182014c41e8b4e7771aed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kepler, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:35:30 np0005592158 systemd[1]: var-lib-containers-storage-overlay-6b19e4b60025e3bedf32e7dec3de7fba2109a9b66f3a0fc49378f66c05baf7e3-merged.mount: Deactivated successfully.
Jan 22 08:35:30 np0005592158 podman[81515]: 2026-01-22 13:35:30.046995195 +0000 UTC m=+0.264002602 container remove a7f4fdd36ea3cf1d3f8ddfad3cc3f5cedce46482b5d182014c41e8b4e7771aed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:35:30 np0005592158 systemd[1]: libpod-conmon-a7f4fdd36ea3cf1d3f8ddfad3cc3f5cedce46482b5d182014c41e8b4e7771aed.scope: Deactivated successfully.
Jan 22 08:35:30 np0005592158 systemd[1]: Reloading.
Jan 22 08:35:30 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:35:30 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:35:30 np0005592158 systemd[1]: Reloading.
Jan 22 08:35:30 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:35:30 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:35:30 np0005592158 systemd[1]: Starting Ceph mon.compute-1 for 088fe176-0106-5401-803c-2da38b73b76a...
Jan 22 08:35:31 np0005592158 podman[81695]: 2026-01-22 13:35:30.936529182 +0000 UTC m=+0.043731486 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:35:31 np0005592158 podman[81695]: 2026-01-22 13:35:31.492834492 +0000 UTC m=+0.600036706 container create 86c62012975c4d3a4f66b2322215389f98408803e87aba4b137aac7442cee7f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-1, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 22 08:35:31 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1caa8f50f0879bad3532cce712a0d881d19081eae018c33d05d80d745a71aacc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:31 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1caa8f50f0879bad3532cce712a0d881d19081eae018c33d05d80d745a71aacc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:31 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1caa8f50f0879bad3532cce712a0d881d19081eae018c33d05d80d745a71aacc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:31 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1caa8f50f0879bad3532cce712a0d881d19081eae018c33d05d80d745a71aacc/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:31 np0005592158 podman[81695]: 2026-01-22 13:35:31.704958446 +0000 UTC m=+0.812160710 container init 86c62012975c4d3a4f66b2322215389f98408803e87aba4b137aac7442cee7f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 22 08:35:31 np0005592158 podman[81695]: 2026-01-22 13:35:31.712420398 +0000 UTC m=+0.819622612 container start 86c62012975c4d3a4f66b2322215389f98408803e87aba4b137aac7442cee7f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:35:31 np0005592158 bash[81695]: 86c62012975c4d3a4f66b2322215389f98408803e87aba4b137aac7442cee7f0
Jan 22 08:35:31 np0005592158 systemd[1]: Started Ceph mon.compute-1 for 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: set uid:gid to 167:167 (ceph:ceph)
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: pidfile_write: ignore empty --pid-file
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: load: jerasure load: lrc 
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: RocksDB version: 7.9.2
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: Git sha 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: DB SUMMARY
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: DB Session ID:  61AVSUXQ8FJR5Z10R2GN
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: CURRENT file:  CURRENT
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: IDENTITY file:  IDENTITY
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-1/store.db dir, Total Num: 0, files: 
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-1/store.db: 000004.log size: 511 ; 
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                         Options.error_if_exists: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                       Options.create_if_missing: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                         Options.paranoid_checks: 1
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                                     Options.env: 0x55f766f40c40
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                                Options.info_log: 0x55f7686b0fc0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                Options.max_file_opening_threads: 16
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                              Options.statistics: (nil)
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                               Options.use_fsync: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                       Options.max_log_file_size: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                         Options.allow_fallocate: 1
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                        Options.use_direct_reads: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:          Options.create_missing_column_families: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                              Options.db_log_dir: 
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                                 Options.wal_dir: 
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                   Options.advise_random_on_open: 1
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                    Options.write_buffer_manager: 0x55f7686c0b40
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                            Options.rate_limiter: (nil)
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                  Options.unordered_write: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                               Options.row_cache: None
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                              Options.wal_filter: None
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:             Options.allow_ingest_behind: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:             Options.two_write_queues: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:             Options.manual_wal_flush: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:             Options.wal_compression: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:             Options.atomic_flush: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                 Options.log_readahead_size: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:             Options.allow_data_in_errors: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:             Options.db_host_id: __hostname__
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:             Options.max_background_jobs: 2
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:             Options.max_background_compactions: -1
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:             Options.max_subcompactions: 1
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:             Options.max_total_wal_size: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                          Options.max_open_files: -1
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                          Options.bytes_per_sync: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:       Options.compaction_readahead_size: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                  Options.max_background_flushes: -1
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: Compression algorithms supported:
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: 	kZSTD supported: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: 	kXpressCompression supported: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: 	kBZip2Compression supported: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: 	kLZ4Compression supported: 1
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: 	kZlibCompression supported: 1
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: 	kSnappyCompression supported: 1
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:           Options.merge_operator: 
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:        Options.compaction_filter: None
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f7686b0c00)
                                                         cache_index_and_filter_blocks: 1
                                                         cache_index_and_filter_blocks_with_high_priority: 0
                                                         pin_l0_filter_and_index_blocks_in_cache: 0
                                                         pin_top_level_index_and_filter: 1
                                                         index_type: 0
                                                         data_block_index_type: 0
                                                         index_shortening: 1
                                                         data_block_hash_table_util_ratio: 0.750000
                                                         checksum: 4
                                                         no_block_cache: 0
                                                         block_cache: 0x55f7686a91f0
                                                         block_cache_name: BinnedLRUCache
                                                         block_cache_options:
                                                           capacity : 536870912
                                                           num_shard_bits : 4
                                                           strict_capacity_limit : 0
                                                           high_pri_pool_ratio: 0.000
                                                         block_cache_compressed: (nil)
                                                         persistent_cache: (nil)
                                                         block_size: 4096
                                                         block_size_deviation: 10
                                                         block_restart_interval: 16
                                                         index_block_restart_interval: 1
                                                         metadata_block_size: 4096
                                                         partition_filters: 0
                                                         use_delta_encoding: 1
                                                         filter_policy: bloomfilter
                                                         whole_key_filtering: 1
                                                         verify_compression: 0
                                                         read_amp_bytes_per_bit: 0
                                                         format_version: 5
                                                         enable_index_compression: 1
                                                         block_align: 0
                                                         max_auto_readahead_size: 262144
                                                         prepopulate_block_cache: 0
                                                         initial_auto_readahead_size: 8192
                                                         num_file_reads_for_auto_readahead: 2
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:        Options.write_buffer_size: 33554432
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:  Options.max_write_buffer_number: 2
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:          Options.compression: NoCompression
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:             Options.num_levels: 7
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b45e9535-17c1-4c17-af76-e2f7345eb341
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088931764497, "job": 1, "event": "recovery_started", "wal_files": [4]}
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088931766494, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088931766643, "job": 1, "event": "recovery_finished"}
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55f7686d2e00
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: DB pointer 0x55f76875c000
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      1/0    1.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f7686a91f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1 does not exist in monmap, will attempt to join an existing cluster
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: using public_addr v2:192.168.122.101:0/0 -> [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0]
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: starting mon.compute-1 rank -1 at public addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] at bind addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-1 fsid 088fe176-0106-5401-803c-2da38b73b76a
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(???) e0 preinit fsid 088fe176-0106-5401-803c-2da38b73b76a
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).mds e2 new map
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).mds e2 print_map#012e2#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-22T13:35:18.163168+0000#012modified#0112026-01-22T13:35:18.163248+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e4 e4: 1 total, 0 up, 1 in
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e5 e5: 2 total, 0 up, 2 in
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e6 e6: 2 total, 0 up, 2 in
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e7 e7: 2 total, 0 up, 2 in
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e8 e8: 2 total, 0 up, 2 in
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e9 e9: 2 total, 0 up, 2 in
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e10 e10: 2 total, 1 up, 2 in
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e11 e11: 2 total, 1 up, 2 in
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e12 e12: 2 total, 1 up, 2 in
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e13 e13: 2 total, 2 up, 2 in
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e14 e14: 2 total, 2 up, 2 in
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e15 e15: 2 total, 2 up, 2 in
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e16 e16: 2 total, 2 up, 2 in
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e17 e17: 2 total, 2 up, 2 in
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e18 e18: 2 total, 2 up, 2 in
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e19 e19: 2 total, 2 up, 2 in
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e20 e20: 2 total, 2 up, 2 in
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e21 e21: 2 total, 2 up, 2 in
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e22 e22: 2 total, 2 up, 2 in
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e23 e23: 2 total, 2 up, 2 in
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e24 e24: 2 total, 2 up, 2 in
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e25 e25: 2 total, 2 up, 2 in
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e26 e26: 2 total, 2 up, 2 in
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e27 e27: 2 total, 2 up, 2 in
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e28 e28: 2 total, 2 up, 2 in
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e29 e29: 2 total, 2 up, 2 in
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e30 e30: 2 total, 2 up, 2 in
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e31 e31: 2 total, 2 up, 2 in
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e32 e32: 2 total, 2 up, 2 in
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e33 e33: 2 total, 2 up, 2 in
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e33 crush map has features 3314933000852226048, adjusting msgr requires
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e33 crush map has features 288514051259236352, adjusting msgr requires
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e33 crush map has features 288514051259236352, adjusting msgr requires
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).osd e33 crush map has features 288514051259236352, adjusting msgr requires
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: Adjusting osd_memory_target on compute-0 to 127.9M
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: Unable to set osd_memory_target on compute-0 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.100:0/974439093' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.100:0/974439093' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.100:0/2472273245' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.100:0/2472273245' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.100:0/105373315' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.100:0/105373315' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.100:0/2816658728' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.100:0/2816658728' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.100:0/1671536897' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.100:0/1671536897' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.100:0/2138351977' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.100:0/2138351977' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.100:0/1551997886' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.100:0/1551997886' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.100:0/1090994608' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.100:0/1090994608' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.100:0/3233251670' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.100:0/3233251670' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.100:0/677900918' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.100:0/677900918' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.100:0/1174767820' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.100:0/1174767820' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.100:0/3318117351' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.100:0/3318117351' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.100:0/1015326372' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.100:0/1015326372' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.100:0/2012634198' entity='client.admin' 
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: Saving service ingress.rgw.default spec with placement count:2
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: Updating compute-2:/etc/ceph/ceph.conf
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: Updating compute-2:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.100:0/4027153888' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.100:0/4027153888' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: Updating compute-2:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.client.admin.keyring
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: Deploying daemon mon.compute-2 on compute-2
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Jan 22 08:35:31 np0005592158 ceph-mon[81715]: mon.compute-1@-1(synchronizing).paxosservice(auth 1..8) refresh upgraded, format 0 -> 3
Jan 22 08:35:34 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Jan 22 08:35:34 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Jan 22 08:35:35 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Jan 22 08:35:36 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Jan 22 08:35:36 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Jan 22 08:35:36 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Jan 22 08:35:37 np0005592158 ceph-mon[81715]: mon.compute-1@-1(probing) e3  my rank is now 2 (was -1)
Jan 22 08:35:37 np0005592158 ceph-mon[81715]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Jan 22 08:35:37 np0005592158 ceph-mon[81715]: paxos.2).electionLogic(0) init, first boot, initializing epoch at 1 
Jan 22 08:35:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 22 08:35:38 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Jan 22 08:35:38 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Jan 22 08:35:41 np0005592158 ceph-mon[81715]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 22 08:35:41 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Jan 22 08:35:41 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Jan 22 08:35:41 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 22 08:35:41 np0005592158 ceph-mon[81715]: mgrc update_daemon_metadata mon.compute-1 metadata {addrs=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,created_at=2026-01-22T13:35:29.957405Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-1,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864312,os=Linux}
Jan 22 08:35:41 np0005592158 ceph-mon[81715]: mon.compute-0 calling monitor election
Jan 22 08:35:41 np0005592158 ceph-mon[81715]: mon.compute-2 calling monitor election
Jan 22 08:35:41 np0005592158 ceph-mon[81715]: mon.compute-1 calling monitor election
Jan 22 08:35:41 np0005592158 ceph-mon[81715]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 22 08:35:41 np0005592158 ceph-mon[81715]: Health detail: HEALTH_ERR 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds
Jan 22 08:35:41 np0005592158 ceph-mon[81715]: [ERR] MDS_ALL_DOWN: 1 filesystem is offline
Jan 22 08:35:41 np0005592158 ceph-mon[81715]:    fs cephfs is offline because no MDS is active for it.
Jan 22 08:35:41 np0005592158 ceph-mon[81715]: [WRN] MDS_UP_LESS_THAN_MAX: 1 filesystem is online with fewer MDS than max_mds
Jan 22 08:35:41 np0005592158 ceph-mon[81715]:    fs cephfs has 0 MDS online, but wants 1
Jan 22 08:35:41 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:41 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:41 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:41 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.tjdsdx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 22 08:35:41 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e33 _set_new_cache_sizes cache_size:1019919786 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:35:43 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.tjdsdx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 22 08:35:43 np0005592158 ceph-mon[81715]: Deploying daemon mgr.compute-2.tjdsdx on compute-2
Jan 22 08:35:43 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Jan 22 08:35:43 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Jan 22 08:35:44 np0005592158 podman[81893]: 2026-01-22 13:35:44.166360765 +0000 UTC m=+0.049356220 container create 13ea967c5cf8b2f1c6bae461af2a83ddd3a2c017af4ea8749edfb74fb5ae5ce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 22 08:35:44 np0005592158 systemd[1]: Started libpod-conmon-13ea967c5cf8b2f1c6bae461af2a83ddd3a2c017af4ea8749edfb74fb5ae5ce3.scope.
Jan 22 08:35:44 np0005592158 systemd[1]: Started libcrun container.
Jan 22 08:35:44 np0005592158 podman[81893]: 2026-01-22 13:35:44.143480555 +0000 UTC m=+0.026476030 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:35:44 np0005592158 podman[81893]: 2026-01-22 13:35:44.247566128 +0000 UTC m=+0.130561603 container init 13ea967c5cf8b2f1c6bae461af2a83ddd3a2c017af4ea8749edfb74fb5ae5ce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_carver, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Jan 22 08:35:44 np0005592158 podman[81893]: 2026-01-22 13:35:44.255346919 +0000 UTC m=+0.138342374 container start 13ea967c5cf8b2f1c6bae461af2a83ddd3a2c017af4ea8749edfb74fb5ae5ce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_carver, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 22 08:35:44 np0005592158 podman[81893]: 2026-01-22 13:35:44.258589857 +0000 UTC m=+0.141585312 container attach 13ea967c5cf8b2f1c6bae461af2a83ddd3a2c017af4ea8749edfb74fb5ae5ce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:35:44 np0005592158 systemd[1]: libpod-13ea967c5cf8b2f1c6bae461af2a83ddd3a2c017af4ea8749edfb74fb5ae5ce3.scope: Deactivated successfully.
Jan 22 08:35:44 np0005592158 gracious_carver[81909]: 167 167
Jan 22 08:35:44 np0005592158 podman[81893]: 2026-01-22 13:35:44.262298607 +0000 UTC m=+0.145294062 container died 13ea967c5cf8b2f1c6bae461af2a83ddd3a2c017af4ea8749edfb74fb5ae5ce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:35:44 np0005592158 conmon[81909]: conmon 13ea967c5cf8b2f1c6ba <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-13ea967c5cf8b2f1c6bae461af2a83ddd3a2c017af4ea8749edfb74fb5ae5ce3.scope/container/memory.events
Jan 22 08:35:44 np0005592158 systemd[1]: var-lib-containers-storage-overlay-8be87f9607915128146b043fd27fa9d4ef1c37b7d5b71fe47ee4a2fa8ce38499-merged.mount: Deactivated successfully.
Jan 22 08:35:44 np0005592158 podman[81893]: 2026-01-22 13:35:44.296295949 +0000 UTC m=+0.179291404 container remove 13ea967c5cf8b2f1c6bae461af2a83ddd3a2c017af4ea8749edfb74fb5ae5ce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_carver, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 22 08:35:44 np0005592158 systemd[1]: libpod-conmon-13ea967c5cf8b2f1c6bae461af2a83ddd3a2c017af4ea8749edfb74fb5ae5ce3.scope: Deactivated successfully.
Jan 22 08:35:44 np0005592158 systemd[1]: Reloading.
Jan 22 08:35:44 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:35:44 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:35:44 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:44 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:44 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:44 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:44 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.hzmatt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 22 08:35:44 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.hzmatt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 22 08:35:44 np0005592158 ceph-mon[81715]: Deploying daemon mgr.compute-1.hzmatt on compute-1
Jan 22 08:35:44 np0005592158 systemd[1]: Reloading.
Jan 22 08:35:44 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:35:44 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:35:44 np0005592158 systemd[1]: Starting Ceph mgr.compute-1.hzmatt for 088fe176-0106-5401-803c-2da38b73b76a...
Jan 22 08:35:44 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Jan 22 08:35:44 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Jan 22 08:35:45 np0005592158 podman[82053]: 2026-01-22 13:35:45.096866584 +0000 UTC m=+0.043154861 container create 48a673850449621d1412afa74c1be893b279247df84600509ea83b75b992c8df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:35:45 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ea4a8356163e77a780fdcc84d02066fd87cbef8f64b5dafa995880bf2c1845/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:45 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ea4a8356163e77a780fdcc84d02066fd87cbef8f64b5dafa995880bf2c1845/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:45 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ea4a8356163e77a780fdcc84d02066fd87cbef8f64b5dafa995880bf2c1845/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:45 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ea4a8356163e77a780fdcc84d02066fd87cbef8f64b5dafa995880bf2c1845/merged/var/lib/ceph/mgr/ceph-compute-1.hzmatt supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:45 np0005592158 podman[82053]: 2026-01-22 13:35:45.157642743 +0000 UTC m=+0.103931050 container init 48a673850449621d1412afa74c1be893b279247df84600509ea83b75b992c8df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 22 08:35:45 np0005592158 podman[82053]: 2026-01-22 13:35:45.16342207 +0000 UTC m=+0.109710347 container start 48a673850449621d1412afa74c1be893b279247df84600509ea83b75b992c8df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 22 08:35:45 np0005592158 bash[82053]: 48a673850449621d1412afa74c1be893b279247df84600509ea83b75b992c8df
Jan 22 08:35:45 np0005592158 podman[82053]: 2026-01-22 13:35:45.078940388 +0000 UTC m=+0.025228685 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:35:45 np0005592158 systemd[1]: Started Ceph mgr.compute-1.hzmatt for 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 08:35:45 np0005592158 ceph-mgr[82073]: set uid:gid to 167:167 (ceph:ceph)
Jan 22 08:35:45 np0005592158 ceph-mgr[82073]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Jan 22 08:35:45 np0005592158 ceph-mgr[82073]: pidfile_write: ignore empty --pid-file
Jan 22 08:35:45 np0005592158 ceph-mgr[82073]: mgr[py] Loading python module 'alerts'
Jan 22 08:35:45 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:45 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:45 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:45 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:45 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 22 08:35:45 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 22 08:35:45 np0005592158 ceph-mon[81715]: Deploying daemon crash.compute-2 on compute-2
Jan 22 08:35:45 np0005592158 ceph-mgr[82073]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 22 08:35:45 np0005592158 ceph-mgr[82073]: mgr[py] Loading python module 'balancer'
Jan 22 08:35:45 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:35:45.642+0000 7fb431a22140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 22 08:35:45 np0005592158 ceph-mgr[82073]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 22 08:35:45 np0005592158 ceph-mgr[82073]: mgr[py] Loading python module 'cephadm'
Jan 22 08:35:45 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:35:45.923+0000 7fb431a22140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 22 08:35:46 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e33 _set_new_cache_sizes cache_size:1020052982 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:35:46 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Jan 22 08:35:46 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Jan 22 08:35:48 np0005592158 ceph-mgr[82073]: mgr[py] Loading python module 'crash'
Jan 22 08:35:48 np0005592158 ceph-mgr[82073]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 22 08:35:48 np0005592158 ceph-mgr[82073]: mgr[py] Loading python module 'dashboard'
Jan 22 08:35:48 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:35:48.448+0000 7fb431a22140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 22 08:35:48 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.f scrub starts
Jan 22 08:35:48 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.f scrub ok
Jan 22 08:35:49 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e34 e34: 2 total, 2 up, 2 in
Jan 22 08:35:50 np0005592158 ceph-mgr[82073]: mgr[py] Loading python module 'devicehealth'
Jan 22 08:35:50 np0005592158 ceph-mgr[82073]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 22 08:35:50 np0005592158 ceph-mgr[82073]: mgr[py] Loading python module 'diskprediction_local'
Jan 22 08:35:50 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:35:50.362+0000 7fb431a22140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 22 08:35:50 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.e scrub starts
Jan 22 08:35:50 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.e scrub ok
Jan 22 08:35:50 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 22 08:35:50 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 22 08:35:50 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]:  from numpy import show_config as show_numpy_config
Jan 22 08:35:50 np0005592158 ceph-mgr[82073]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 22 08:35:50 np0005592158 ceph-mgr[82073]: mgr[py] Loading python module 'influx'
Jan 22 08:35:50 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:35:50.960+0000 7fb431a22140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 22 08:35:51 np0005592158 ceph-mgr[82073]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 22 08:35:51 np0005592158 ceph-mgr[82073]: mgr[py] Loading python module 'insights'
Jan 22 08:35:51 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:35:51.231+0000 7fb431a22140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 22 08:35:51 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 08:35:51 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 22 08:35:51 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:51 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 08:35:51 np0005592158 ceph-mgr[82073]: mgr[py] Loading python module 'iostat'
Jan 22 08:35:51 np0005592158 ceph-mgr[82073]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 22 08:35:51 np0005592158 ceph-mgr[82073]: mgr[py] Loading python module 'k8sevents'
Jan 22 08:35:51 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:35:51.763+0000 7fb431a22140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 22 08:35:51 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e34 _set_new_cache_sizes cache_size:1020054709 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:35:53 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e35 e35: 2 total, 2 up, 2 in
Jan 22 08:35:53 np0005592158 ceph-mgr[82073]: mgr[py] Loading python module 'localpool'
Jan 22 08:35:53 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.100:0/777136089' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Jan 22 08:35:53 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:35:54 np0005592158 ceph-mgr[82073]: mgr[py] Loading python module 'mds_autoscaler'
Jan 22 08:35:54 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e36 e36: 2 total, 2 up, 2 in
Jan 22 08:35:54 np0005592158 ceph-mgr[82073]: mgr[py] Loading python module 'mirroring'
Jan 22 08:35:54 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 22 08:35:54 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Jan 22 08:35:54 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:35:54 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:35:54 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:54 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:54 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:54 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:54 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:35:54 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:35:54 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:35:54 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 22 08:35:54 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:35:54 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:35:54 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 08:35:54 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:55 np0005592158 ceph-mgr[82073]: mgr[py] Loading python module 'nfs'
Jan 22 08:35:55 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e37 e37: 2 total, 2 up, 2 in
Jan 22 08:35:55 np0005592158 ceph-mgr[82073]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 22 08:35:55 np0005592158 ceph-mgr[82073]: mgr[py] Loading python module 'orchestrator'
Jan 22 08:35:55 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:35:55.931+0000 7fb431a22140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 22 08:35:56 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Jan 22 08:35:56 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 22 08:35:56 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 22 08:35:56 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e38 e38: 2 total, 2 up, 2 in
Jan 22 08:35:56 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e39 e39: 3 total, 2 up, 3 in
Jan 22 08:35:56 np0005592158 ceph-mgr[82073]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 22 08:35:56 np0005592158 ceph-mgr[82073]: mgr[py] Loading python module 'osd_perf_query'
Jan 22 08:35:56 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:35:56.699+0000 7fb431a22140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 22 08:35:56 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:35:57 np0005592158 ceph-mgr[82073]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 22 08:35:57 np0005592158 ceph-mgr[82073]: mgr[py] Loading python module 'osd_support'
Jan 22 08:35:57 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:35:57.038+0000 7fb431a22140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 22 08:35:57 np0005592158 ceph-mgr[82073]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 22 08:35:57 np0005592158 ceph-mgr[82073]: mgr[py] Loading python module 'pg_autoscaler'
Jan 22 08:35:57 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:35:57.355+0000 7fb431a22140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 22 08:35:57 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.102:0/3979291260' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3569f689-49d4-4dc0-921b-9570c720a1f3"}]: dispatch
Jan 22 08:35:57 np0005592158 ceph-mon[81715]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3569f689-49d4-4dc0-921b-9570c720a1f3"}]: dispatch
Jan 22 08:35:57 np0005592158 ceph-mon[81715]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "3569f689-49d4-4dc0-921b-9570c720a1f3"}]': finished
Jan 22 08:35:57 np0005592158 ceph-mgr[82073]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 22 08:35:57 np0005592158 ceph-mgr[82073]: mgr[py] Loading python module 'progress'
Jan 22 08:35:57 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:35:57.670+0000 7fb431a22140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 22 08:35:57 np0005592158 ceph-mgr[82073]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 22 08:35:57 np0005592158 ceph-mgr[82073]: mgr[py] Loading python module 'prometheus'
Jan 22 08:35:57 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:35:57.993+0000 7fb431a22140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 22 08:35:58 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:35:59 np0005592158 ceph-mgr[82073]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 22 08:35:59 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:35:59.213+0000 7fb431a22140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 22 08:35:59 np0005592158 ceph-mgr[82073]: mgr[py] Loading python module 'rbd_support'
Jan 22 08:35:59 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e40 e40: 3 total, 2 up, 3 in
Jan 22 08:35:59 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 40 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=40 pruub=9.400292397s) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active pruub 94.091590881s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:35:59 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 40 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=40 pruub=9.400292397s) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown pruub 94.091590881s@ mbc={}] state<Start>: transitioning to Primary
Jan 22 08:35:59 np0005592158 ceph-mgr[82073]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 22 08:35:59 np0005592158 ceph-mgr[82073]: mgr[py] Loading python module 'restful'
Jan 22 08:35:59 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:35:59.549+0000 7fb431a22140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 22 08:35:59 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:35:59 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.c scrub starts
Jan 22 08:35:59 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.c scrub ok
Jan 22 08:36:00 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e41 e41: 3 total, 2 up, 3 in
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.1b( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.1c( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.1a( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.1e( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.1d( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.1f( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.13( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.10( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.11( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.17( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.14( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.15( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.a( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.b( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.8( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.9( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.12( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.6( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.5( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.7( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.1( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.2( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.d( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.e( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.19( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.18( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.3( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.4( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.c( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.f( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.16( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.1c( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.1a( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.13( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.10( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.17( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.14( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.15( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.1e( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.a( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.9( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.12( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.b( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.6( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.7( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.1( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.0( empty local-lis/les=40/41 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.2( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.d( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.e( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.19( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.18( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.3( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.c( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.4( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.f( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.8( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.1d( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 41 pg[7.16( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [1] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:00 np0005592158 ceph-mgr[82073]: mgr[py] Loading python module 'rgw'
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.d scrub starts
Jan 22 08:36:00 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.d scrub ok
Jan 22 08:36:01 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:01 np0005592158 ceph-mgr[82073]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 22 08:36:01 np0005592158 ceph-mgr[82073]: mgr[py] Loading python module 'rook'
Jan 22 08:36:01 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:36:01.221+0000 7fb431a22140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 22 08:36:01 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:36:03 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:03 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:03 np0005592158 ceph-mgr[82073]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 22 08:36:03 np0005592158 ceph-mgr[82073]: mgr[py] Loading python module 'selftest'
Jan 22 08:36:03 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:36:03.831+0000 7fb431a22140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 22 08:36:04 np0005592158 ceph-mgr[82073]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 22 08:36:04 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:36:04.106+0000 7fb431a22140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 22 08:36:04 np0005592158 ceph-mgr[82073]: mgr[py] Loading python module 'snap_schedule'
Jan 22 08:36:04 np0005592158 ceph-mgr[82073]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 22 08:36:04 np0005592158 ceph-mgr[82073]: mgr[py] Loading python module 'stats'
Jan 22 08:36:04 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:36:04.425+0000 7fb431a22140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 22 08:36:04 np0005592158 ceph-mgr[82073]: mgr[py] Loading python module 'status'
Jan 22 08:36:05 np0005592158 ceph-mgr[82073]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 22 08:36:05 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:36:05.002+0000 7fb431a22140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 22 08:36:05 np0005592158 ceph-mgr[82073]: mgr[py] Loading python module 'telegraf'
Jan 22 08:36:05 np0005592158 systemd[1]: session-19.scope: Deactivated successfully.
Jan 22 08:36:05 np0005592158 systemd[1]: session-19.scope: Consumed 9.404s CPU time.
Jan 22 08:36:05 np0005592158 systemd-logind[787]: Session 19 logged out. Waiting for processes to exit.
Jan 22 08:36:05 np0005592158 systemd-logind[787]: Removed session 19.
Jan 22 08:36:05 np0005592158 ceph-mgr[82073]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 22 08:36:05 np0005592158 ceph-mgr[82073]: mgr[py] Loading python module 'telemetry'
Jan 22 08:36:05 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:36:05.269+0000 7fb431a22140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 22 08:36:05 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Jan 22 08:36:05 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Jan 22 08:36:05 np0005592158 ceph-mgr[82073]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 22 08:36:05 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:36:05.926+0000 7fb431a22140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 22 08:36:05 np0005592158 ceph-mgr[82073]: mgr[py] Loading python module 'test_orchestrator'
Jan 22 08:36:06 np0005592158 ceph-mgr[82073]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 22 08:36:06 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:36:06.668+0000 7fb431a22140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 22 08:36:06 np0005592158 ceph-mgr[82073]: mgr[py] Loading python module 'volumes'
Jan 22 08:36:06 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:36:07 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e42 e42: 3 total, 2 up, 3 in
Jan 22 08:36:07 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:36:07 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:36:07 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 22 08:36:07 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:36:07 np0005592158 ceph-mgr[82073]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 22 08:36:07 np0005592158 ceph-mgr[82073]: mgr[py] Loading python module 'zabbix'
Jan 22 08:36:07 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:36:07.489+0000 7fb431a22140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[4.15( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[6.e( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[4.c( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[5.e( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[6.d( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[6.5( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[6.2( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[5.1b( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[6.3( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[5.2( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[5.7( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[6.7( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[5.4( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[6.8( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[6.a( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[5.15( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[5.10( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[5.1f( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[5.1c( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[4.1f( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.554998398s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 101.353668213s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.554976463s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.353668213s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.1e( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561773300s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 101.360542297s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.1e( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561741829s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.360542297s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.1d( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.562335014s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 101.361251831s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.1d( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.562321663s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.361251831s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561624527s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 101.360671997s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561611176s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.360671997s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.10( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561557770s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 101.360694885s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.10( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561546326s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.360694885s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.14( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561418533s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 101.360733032s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.14( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561405182s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.360733032s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561334610s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 101.360755920s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561320305s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.360755920s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.b( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561245918s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 101.360763550s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.b( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561233521s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.360763550s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561607361s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 101.361213684s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561594963s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.361213684s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561145782s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 101.360832214s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561132431s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.360832214s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561257362s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 101.361030579s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561244011s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.361030579s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561180115s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 101.361106873s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561167717s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.361106873s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561090469s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 101.361122131s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561079025s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.361122131s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561025620s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 101.361145020s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.561012268s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.361145020s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.560912132s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 101.361167908s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.560898781s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.361167908s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.560844421s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 101.361190796s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.560832977s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.361190796s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.560770988s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active pruub 101.361206055s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.560759544s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 101.361206055s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:07 np0005592158 ceph-mgr[82073]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 22 08:36:07 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-1-hzmatt[82069]: 2026-01-22T13:36:07.792+0000 7fb431a22140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 22 08:36:07 np0005592158 ceph-mgr[82073]: ms_deliver_dispatch: unhandled message 0x562bbfe2f600 mon_map magic: 0 v1 from mon.2 v2:192.168.122.101:3300/0
Jan 22 08:36:07 np0005592158 ceph-mgr[82073]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 08:36:08 np0005592158 ceph-mgr[82073]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 08:36:09 np0005592158 ceph-mgr[82073]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 08:36:10 np0005592158 ceph-mgr[82073]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 08:36:10 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Jan 22 08:36:10 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Jan 22 08:36:11 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e43 e43: 3 total, 2 up, 3 in
Jan 22 08:36:11 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:36:11 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:36:11 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 22 08:36:11 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:36:11 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:36:11 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:36:11 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 22 08:36:11 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:36:11 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[5.e( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[6.e( empty local-lis/les=42/43 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[6.d( empty local-lis/les=42/43 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[6.5( empty local-lis/les=42/43 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[5.1( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[4.15( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[5.1a( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[4.1b( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[4.1a( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[4.c( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[5.1b( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[4.e( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[6.3( empty local-lis/les=42/43 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[4.d( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[5.7( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[5.2( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[5.f( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[6.2( empty local-lis/les=42/43 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[4.1( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[5.4( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[6.8( empty local-lis/les=42/43 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[4.5( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[6.7( empty local-lis/les=42/43 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[6.a( empty local-lis/les=42/43 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[4.a( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[4.9( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[4.8( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[4.13( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[5.11( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[5.10( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[5.15( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[5.1c( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[5.1f( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[4.1f( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[4.18( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[5.18( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[5.16( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 43 pg[5.9( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:11 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:36:12 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Jan 22 08:36:12 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Jan 22 08:36:13 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:36:13 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:36:13 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 22 08:36:13 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:36:13 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Jan 22 08:36:13 np0005592158 ceph-mon[81715]: Deploying daemon osd.2 on compute-2
Jan 22 08:36:16 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:36:17 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.a deep-scrub starts
Jan 22 08:36:17 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.a deep-scrub ok
Jan 22 08:36:19 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Jan 22 08:36:19 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Jan 22 08:36:21 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:36:21 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Jan 22 08:36:21 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Jan 22 08:36:23 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Jan 22 08:36:23 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Jan 22 08:36:25 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Jan 22 08:36:25 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Jan 22 08:36:26 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.7 deep-scrub starts
Jan 22 08:36:26 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.7 deep-scrub ok
Jan 22 08:36:26 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:36:28 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.c scrub starts
Jan 22 08:36:28 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.c scrub ok
Jan 22 08:36:29 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:30 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.d scrub starts
Jan 22 08:36:30 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.d scrub ok
Jan 22 08:36:31 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Jan 22 08:36:31 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Jan 22 08:36:31 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:36:32 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e44 e44: 3 total, 2 up, 3 in
Jan 22 08:36:32 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:32 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:32 np0005592158 ceph-mon[81715]: from='osd.2 [v2:192.168.122.102:6800/892178328,v1:192.168.122.102:6801/892178328]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 22 08:36:32 np0005592158 ceph-mon[81715]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Jan 22 08:36:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e45 e45: 3 total, 2 up, 3 in
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[4.1f( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=9.653626442s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 128.779006958s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[4.1f( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=9.653626442s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.779006958s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[3.1a( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=45 pruub=11.794871330s) [] r=-1 lpr=45 pi=[28,45)/1 crt=0'0 mlcod 0'0 active pruub 130.920425415s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[2.18( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=45 pruub=11.259327888s) [] r=-1 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active pruub 130.384948730s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[3.1a( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=45 pruub=11.794871330s) [] r=-1 lpr=45 pi=[28,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.920425415s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[2.18( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=45 pruub=11.259327888s) [] r=-1 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.384948730s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=14.235723495s) [] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 133.361526489s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=14.235723495s) [] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 133.361526489s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=14.235956192s) [] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 133.361801147s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=14.235956192s) [] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 133.361801147s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[2.12( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=45 pruub=11.258738518s) [] r=-1 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active pruub 130.384674072s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[2.12( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=45 pruub=11.258738518s) [] r=-1 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.384674072s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=45 pruub=11.794334412s) [] r=-1 lpr=45 pi=[28,45)/1 crt=0'0 mlcod 0'0 active pruub 130.920425415s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=45 pruub=11.794334412s) [] r=-1 lpr=45 pi=[28,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.920425415s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[2.f( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=45 pruub=11.258543968s) [] r=-1 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active pruub 130.384719849s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[2.f( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=45 pruub=11.258543968s) [] r=-1 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.384719849s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=45 pruub=11.794472694s) [] r=-1 lpr=45 pi=[28,45)/1 crt=0'0 mlcod 0'0 active pruub 130.920593262s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=45 pruub=11.794253349s) [] r=-1 lpr=45 pi=[28,45)/1 crt=0'0 mlcod 0'0 active pruub 130.920516968s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=45 pruub=11.794472694s) [] r=-1 lpr=45 pi=[28,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.920593262s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=45 pruub=11.794253349s) [] r=-1 lpr=45 pi=[28,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.920516968s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[5.4( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=9.652106285s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 128.778671265s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=9.652234077s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 128.778808594s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[5.4( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=9.652106285s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.778671265s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=14.235224724s) [] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 133.361862183s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=9.652074814s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 128.778793335s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=45 pruub=11.257728577s) [] r=-1 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active pruub 130.384506226s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=9.652234077s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.778808594s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[4.1( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=9.651782990s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 128.778671265s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[4.1( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=9.651782990s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.778671265s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=14.235224724s) [] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 133.361862183s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=9.652074814s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.778793335s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=45 pruub=11.257728577s) [] r=-1 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.384506226s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=45 pruub=11.793506622s) [] r=-1 lpr=45 pi=[28,45)/1 crt=0'0 mlcod 0'0 active pruub 130.920608521s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[2.b( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=45 pruub=11.257336617s) [] r=-1 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active pruub 130.384475708s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=45 pruub=11.793506622s) [] r=-1 lpr=45 pi=[28,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.920608521s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[2.1c( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=45 pruub=11.257034302s) [] r=-1 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active pruub 130.384429932s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[2.1c( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=45 pruub=11.257034302s) [] r=-1 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.384429932s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=9.647268295s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 128.774719238s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=9.647268295s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.774719238s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[2.1d( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=45 pruub=11.248373032s) [] r=-1 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active pruub 130.375930786s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[2.1d( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=45 pruub=11.248373032s) [] r=-1 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.375930786s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[5.e( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=9.647026062s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 128.774658203s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[5.e( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=9.647026062s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.774658203s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[2.b( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=45 pruub=11.257336617s) [] r=-1 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.384475708s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=45 pruub=11.793272972s) [] r=-1 lpr=45 pi=[28,45)/1 crt=0'0 mlcod 0'0 active pruub 130.920959473s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[4.15( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=9.646950722s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 128.774719238s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[4.15( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=9.646950722s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.774719238s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[7.16( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=14.234349251s) [] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 133.362182617s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[7.16( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=14.234349251s) [] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 133.362182617s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=45 pruub=11.793272972s) [] r=-1 lpr=45 pi=[28,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.920959473s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:33 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Jan 22 08:36:33 np0005592158 ceph-mon[81715]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 22 08:36:33 np0005592158 ceph-mon[81715]: from='osd.2 [v2:192.168.122.102:6800/892178328,v1:192.168.122.102:6801/892178328]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 22 08:36:33 np0005592158 ceph-mon[81715]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 22 08:36:35 np0005592158 ceph-mon[81715]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Jan 22 08:36:35 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:35 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:35 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.gfsxzw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 22 08:36:35 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.gfsxzw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 22 08:36:35 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:35 np0005592158 ceph-mon[81715]: Deploying daemon rgw.rgw.compute-2.gfsxzw on compute-2
Jan 22 08:36:36 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Jan 22 08:36:36 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Jan 22 08:36:36 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:36:37 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Jan 22 08:36:37 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Jan 22 08:36:39 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Jan 22 08:36:39 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Jan 22 08:36:40 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e46 e46: 3 total, 2 up, 3 in
Jan 22 08:36:40 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:40 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.1a deep-scrub starts
Jan 22 08:36:40 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.1a deep-scrub ok
Jan 22 08:36:41 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e47 e47: 3 total, 2 up, 3 in
Jan 22 08:36:41 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:36:41 np0005592158 podman[82248]: 2026-01-22 13:36:41.909849263 +0000 UTC m=+0.051807796 container create 1ec786deb05b30425d0c360e8fd0d420dfb29343807c487b4a1ccb3d429d8d81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jones, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:36:41 np0005592158 systemd[1]: Started libpod-conmon-1ec786deb05b30425d0c360e8fd0d420dfb29343807c487b4a1ccb3d429d8d81.scope.
Jan 22 08:36:41 np0005592158 podman[82248]: 2026-01-22 13:36:41.886012212 +0000 UTC m=+0.027970765 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:36:41 np0005592158 systemd[1]: Started libcrun container.
Jan 22 08:36:41 np0005592158 podman[82248]: 2026-01-22 13:36:41.993381896 +0000 UTC m=+0.135340439 container init 1ec786deb05b30425d0c360e8fd0d420dfb29343807c487b4a1ccb3d429d8d81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 22 08:36:42 np0005592158 podman[82248]: 2026-01-22 13:36:42.001108898 +0000 UTC m=+0.143067431 container start 1ec786deb05b30425d0c360e8fd0d420dfb29343807c487b4a1ccb3d429d8d81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:36:42 np0005592158 podman[82248]: 2026-01-22 13:36:42.005240071 +0000 UTC m=+0.147198584 container attach 1ec786deb05b30425d0c360e8fd0d420dfb29343807c487b4a1ccb3d429d8d81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jones, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:36:42 np0005592158 reverent_jones[82264]: 167 167
Jan 22 08:36:42 np0005592158 systemd[1]: libpod-1ec786deb05b30425d0c360e8fd0d420dfb29343807c487b4a1ccb3d429d8d81.scope: Deactivated successfully.
Jan 22 08:36:42 np0005592158 conmon[82264]: conmon 1ec786deb05b30425d0c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1ec786deb05b30425d0c360e8fd0d420dfb29343807c487b4a1ccb3d429d8d81.scope/container/memory.events
Jan 22 08:36:42 np0005592158 podman[82248]: 2026-01-22 13:36:42.010226137 +0000 UTC m=+0.152184660 container died 1ec786deb05b30425d0c360e8fd0d420dfb29343807c487b4a1ccb3d429d8d81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jones, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 22 08:36:42 np0005592158 systemd[1]: var-lib-containers-storage-overlay-7b2d05f86c171946c6d6de1189602d9516b6029f065f02e6c6bd13ed83c9e5e5-merged.mount: Deactivated successfully.
Jan 22 08:36:42 np0005592158 podman[82248]: 2026-01-22 13:36:42.050909619 +0000 UTC m=+0.192868142 container remove 1ec786deb05b30425d0c360e8fd0d420dfb29343807c487b4a1ccb3d429d8d81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:36:42 np0005592158 systemd[1]: libpod-conmon-1ec786deb05b30425d0c360e8fd0d420dfb29343807c487b4a1ccb3d429d8d81.scope: Deactivated successfully.
Jan 22 08:36:42 np0005592158 systemd[1]: Reloading.
Jan 22 08:36:42 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:36:42 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:36:42 np0005592158 systemd[1]: Reloading.
Jan 22 08:36:42 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:36:42 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:36:42 np0005592158 systemd[1]: Starting Ceph rgw.rgw.compute-1.thdhdp for 088fe176-0106-5401-803c-2da38b73b76a...
Jan 22 08:36:42 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:42 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.102:0/38428064' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 22 08:36:42 np0005592158 ceph-mon[81715]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 22 08:36:42 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:42 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.thdhdp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 22 08:36:42 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.thdhdp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 22 08:36:42 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:42 np0005592158 ceph-mon[81715]: Deploying daemon rgw.rgw.compute-1.thdhdp on compute-1
Jan 22 08:36:42 np0005592158 podman[82407]: 2026-01-22 13:36:42.937288093 +0000 UTC m=+0.050458330 container create 23102aca31774d35fb66e5a0ea310071b7f3d8f6b2965c50c70b36b8efad689e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-rgw-rgw-compute-1-thdhdp, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 08:36:42 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7391ccc110e8b98a4ae90a6485f77f17dd40862bd583516c626b2f26638845de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:42 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7391ccc110e8b98a4ae90a6485f77f17dd40862bd583516c626b2f26638845de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:42 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7391ccc110e8b98a4ae90a6485f77f17dd40862bd583516c626b2f26638845de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:42 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7391ccc110e8b98a4ae90a6485f77f17dd40862bd583516c626b2f26638845de/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-1.thdhdp supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:42 np0005592158 podman[82407]: 2026-01-22 13:36:42.994796255 +0000 UTC m=+0.107966492 container init 23102aca31774d35fb66e5a0ea310071b7f3d8f6b2965c50c70b36b8efad689e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-rgw-rgw-compute-1-thdhdp, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:36:43 np0005592158 podman[82407]: 2026-01-22 13:36:43.002248328 +0000 UTC m=+0.115418545 container start 23102aca31774d35fb66e5a0ea310071b7f3d8f6b2965c50c70b36b8efad689e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-rgw-rgw-compute-1-thdhdp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 08:36:43 np0005592158 bash[82407]: 23102aca31774d35fb66e5a0ea310071b7f3d8f6b2965c50c70b36b8efad689e
Jan 22 08:36:43 np0005592158 podman[82407]: 2026-01-22 13:36:42.916039553 +0000 UTC m=+0.029209800 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:36:43 np0005592158 systemd[1]: Started Ceph rgw.rgw.compute-1.thdhdp for 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 08:36:43 np0005592158 radosgw[82426]: deferred set uid:gid to 167:167 (ceph:ceph)
Jan 22 08:36:43 np0005592158 radosgw[82426]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Jan 22 08:36:43 np0005592158 radosgw[82426]: framework: beast
Jan 22 08:36:43 np0005592158 radosgw[82426]: framework conf key: endpoint, val: 192.168.122.101:8082
Jan 22 08:36:43 np0005592158 radosgw[82426]: init_numa not setting numa affinity
Jan 22 08:36:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e48 e48: 3 total, 2 up, 3 in
Jan 22 08:36:43 np0005592158 ceph-mon[81715]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 22 08:36:43 np0005592158 ceph-mon[81715]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 08:36:43 np0005592158 ceph-mon[81715]: OSD bench result of 4825.905468 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 22 08:36:43 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:43 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:43 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:43 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.iqhnfa", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 22 08:36:43 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.iqhnfa", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 22 08:36:43 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:43 np0005592158 ceph-mon[81715]: Deploying daemon rgw.rgw.compute-0.iqhnfa on compute-0
Jan 22 08:36:44 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e49 e49: 3 total, 3 up, 3 in
Jan 22 08:36:44 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Jan 22 08:36:44 np0005592158 ceph-mon[81715]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/3143195983' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 22 08:36:45 np0005592158 ceph-mon[81715]: osd.2 [v2:192.168.122.102:6800/892178328,v1:192.168.122.102:6801/892178328] boot
Jan 22 08:36:45 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.101:0/3143195983' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 22 08:36:45 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.102:0/38428064' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 22 08:36:45 np0005592158 ceph-mon[81715]: from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 22 08:36:45 np0005592158 ceph-mon[81715]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 22 08:36:45 np0005592158 ceph-mon[81715]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 22 08:36:45 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e50 e50: 3 total, 3 up, 3 in
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[4.1f( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=-1 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=49 pruub=2.589950562s) [2] r=-1 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 133.361526489s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[2.18( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=-1 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=49 pruub=2.589828014s) [2] r=-1 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 133.361526489s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[2.18( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=-1 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[3.1a( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49 pruub=0.148709118s) [2] r=-1 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.920425415s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[3.1a( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49 pruub=0.148670778s) [2] r=-1 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.920425415s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[3.15( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49 pruub=0.148662463s) [2] r=-1 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.920593262s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[4.1f( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=-1 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[3.15( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49 pruub=0.148639068s) [2] r=-1 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.920593262s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[2.12( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=-1 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[2.12( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=-1 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[3.11( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49 pruub=0.148291469s) [2] r=-1 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.920425415s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[4.9( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=-1 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[3.11( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49 pruub=0.148260996s) [2] r=-1 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.920425415s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[2.f( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=-1 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[3.e( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49 pruub=0.148221418s) [2] r=-1 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.920516968s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[4.8( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=-1 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[4.9( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=-1 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[4.8( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=-1 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[2.f( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=-1 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=49 pruub=2.589405537s) [2] r=-1 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 133.361801147s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[3.e( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49 pruub=0.148026794s) [2] r=-1 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.920516968s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=49 pruub=2.589246511s) [2] r=-1 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 133.361801147s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[5.4( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=-1 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[5.4( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=-1 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[2.5( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=-1 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[4.1( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=-1 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[2.5( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=-1 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[4.1( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=-1 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=49 pruub=2.588890314s) [2] r=-1 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 133.361862183s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[2.b( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=-1 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[2.1c( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=-1 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[2.1c( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=-1 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[2.b( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=-1 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[3.1d( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49 pruub=0.147798270s) [2] r=-1 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.920959473s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[3.1d( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49 pruub=0.147768766s) [2] r=-1 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.920959473s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[3.9( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49 pruub=0.147391483s) [2] r=-1 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.920608521s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[5.1a( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=-1 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[2.1d( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=-1 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[3.9( empty local-lis/les=28/29 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49 pruub=0.147337750s) [2] r=-1 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 130.920608521s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[5.1a( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=-1 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[2.1d( empty local-lis/les=20/22 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=-1 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[5.e( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=-1 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[4.15( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=-1 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[5.e( empty local-lis/les=42/43 n=0 ec=36/20 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=-1 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[4.15( empty local-lis/les=42/43 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=-1 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=49 pruub=2.588813782s) [2] r=-1 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 133.361862183s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 49 pg[7.16( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=49 pruub=2.588687897s) [2] r=-1 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 133.362182617s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 50 pg[7.16( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=49 pruub=2.588541985s) [2] r=-1 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 133.362182617s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Jan 22 08:36:45 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Jan 22 08:36:46 np0005592158 ceph-mon[81715]: from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 22 08:36:46 np0005592158 ceph-mon[81715]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 22 08:36:46 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:46 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:46 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:46 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:46 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:46 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.zycvef", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 22 08:36:46 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.zycvef", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 22 08:36:46 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 4.c scrub starts
Jan 22 08:36:46 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 4.c scrub ok
Jan 22 08:36:46 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:36:47 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 6.e scrub starts
Jan 22 08:36:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e51 e51: 3 total, 3 up, 3 in
Jan 22 08:36:47 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 6.e scrub ok
Jan 22 08:36:47 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 51 pg[10.0( empty local-lis/les=0/0 n=0 ec=51/51 lis/c=0/0 les/c/f=0/0/0 sis=51) [1] r=0 lpr=51 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Jan 22 08:36:47 np0005592158 ceph-mon[81715]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/3143195983' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 22 08:36:48 np0005592158 ceph-mon[81715]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 22 08:36:48 np0005592158 ceph-mon[81715]: Deploying daemon mds.cephfs.compute-2.zycvef on compute-2
Jan 22 08:36:48 np0005592158 ceph-mon[81715]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 08:36:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e52 e52: 3 total, 3 up, 3 in
Jan 22 08:36:49 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 52 pg[10.0( empty local-lis/les=51/52 n=0 ec=51/51 lis/c=0/0 les/c/f=0/0/0 sis=51) [1] r=0 lpr=51 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:49 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.101:0/3143195983' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 22 08:36:49 np0005592158 ceph-mon[81715]: from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 22 08:36:49 np0005592158 ceph-mon[81715]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 22 08:36:49 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.100:0/3865277149' entity='client.rgw.rgw.compute-0.iqhnfa' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 22 08:36:49 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.102:0/38428064' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 22 08:36:49 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:49 np0005592158 ceph-mon[81715]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 22 08:36:49 np0005592158 ceph-mon[81715]: from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 22 08:36:49 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.100:0/3865277149' entity='client.rgw.rgw.compute-0.iqhnfa' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 22 08:36:49 np0005592158 ceph-mon[81715]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 22 08:36:49 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 6.d scrub starts
Jan 22 08:36:49 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 6.d scrub ok
Jan 22 08:36:50 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:50 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:50 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:50 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zjixst", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 22 08:36:50 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zjixst", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 22 08:36:50 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Jan 22 08:36:50 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Jan 22 08:36:51 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).mds e3 new map
Jan 22 08:36:51 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).mds e3 print_map#012e3#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-22T13:35:18.163168+0000#012modified#0112026-01-22T13:35:18.163248+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-2.zycvef{-1:24139} state up:standby seq 1 addr [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] compat {c=[1],r=[1],i=[7ff]}]
Jan 22 08:36:51 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e53 e53: 3 total, 3 up, 3 in
Jan 22 08:36:51 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Jan 22 08:36:51 np0005592158 ceph-mon[81715]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/1101481797' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 22 08:36:51 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).mds e4 new map
Jan 22 08:36:51 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).mds e4 print_map#012e4#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-22T13:35:18.163168+0000#012modified#0112026-01-22T13:36:51.171709+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24139}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-2.zycvef{0:24139} state up:creating seq 1 addr [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Jan 22 08:36:51 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:36:52 np0005592158 ceph-mon[81715]: Deploying daemon mds.cephfs.compute-0.zjixst on compute-0
Jan 22 08:36:52 np0005592158 ceph-mon[81715]: daemon mds.cephfs.compute-2.zycvef assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 22 08:36:52 np0005592158 ceph-mon[81715]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 22 08:36:52 np0005592158 ceph-mon[81715]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 22 08:36:52 np0005592158 ceph-mon[81715]: Cluster is now healthy
Jan 22 08:36:52 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.100:0/2562405514' entity='client.rgw.rgw.compute-0.iqhnfa' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 22 08:36:52 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.101:0/1101481797' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 22 08:36:52 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.102:0/3083812118' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 22 08:36:52 np0005592158 ceph-mon[81715]: from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 22 08:36:52 np0005592158 ceph-mon[81715]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 22 08:36:52 np0005592158 ceph-mon[81715]: daemon mds.cephfs.compute-2.zycvef is now active in filesystem cephfs as rank 0
Jan 22 08:36:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).mds e5 new map
Jan 22 08:36:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).mds e5 print_map#012e5#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-22T13:35:18.163168+0000#012modified#0112026-01-22T13:36:52.245537+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24139}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-2.zycvef{0:24139} state up:active seq 2 addr [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Jan 22 08:36:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e54 e54: 3 total, 3 up, 3 in
Jan 22 08:36:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Jan 22 08:36:52 np0005592158 ceph-mon[81715]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/1101481797' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 22 08:36:52 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Jan 22 08:36:52 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Jan 22 08:36:53 np0005592158 ceph-mon[81715]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 08:36:53 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.100:0/2562405514' entity='client.rgw.rgw.compute-0.iqhnfa' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 22 08:36:53 np0005592158 ceph-mon[81715]: from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 22 08:36:53 np0005592158 ceph-mon[81715]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 22 08:36:53 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.100:0/2562405514' entity='client.rgw.rgw.compute-0.iqhnfa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 22 08:36:53 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.102:0/3083812118' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 22 08:36:53 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.101:0/1101481797' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 22 08:36:53 np0005592158 ceph-mon[81715]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 22 08:36:53 np0005592158 ceph-mon[81715]: from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 22 08:36:53 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:54 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).mds e6 new map
Jan 22 08:36:54 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).mds e6 print_map#012e6#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-22T13:35:18.163168+0000#012modified#0112026-01-22T13:36:52.245537+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24139}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-2.zycvef{0:24139} state up:active seq 2 addr [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.zjixst{-1:14337} state up:standby seq 1 addr [v2:192.168.122.100:6806/2895449706,v1:192.168.122.100:6807/2895449706] compat {c=[1],r=[1],i=[7ff]}]
Jan 22 08:36:54 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e55 e55: 3 total, 3 up, 3 in
Jan 22 08:36:54 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).mds e7 new map
Jan 22 08:36:54 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).mds e7 print_map#012e7#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-22T13:35:18.163168+0000#012modified#0112026-01-22T13:36:52.245537+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24139}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.zycvef{0:24139} state up:active seq 2 addr [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.zjixst{-1:14337} state up:standby seq 1 addr [v2:192.168.122.100:6806/2895449706,v1:192.168.122.100:6807/2895449706] compat {c=[1],r=[1],i=[7ff]}]
Jan 22 08:36:55 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:55 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.100:0/2562405514' entity='client.rgw.rgw.compute-0.iqhnfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 22 08:36:55 np0005592158 ceph-mon[81715]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 22 08:36:55 np0005592158 ceph-mon[81715]: from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 22 08:36:55 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:55 np0005592158 ceph-mon[81715]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 22 08:36:55 np0005592158 ceph-mon[81715]: Cluster is now healthy
Jan 22 08:36:55 np0005592158 podman[82637]: 2026-01-22 13:36:55.953638598 +0000 UTC m=+0.046689207 container create a3759077615f2675a6e14efb42389cc3b9afa9d7302414270e198d6d75eb6003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cannon, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 08:36:56 np0005592158 systemd[1]: Started libpod-conmon-a3759077615f2675a6e14efb42389cc3b9afa9d7302414270e198d6d75eb6003.scope.
Jan 22 08:36:56 np0005592158 podman[82637]: 2026-01-22 13:36:55.931922764 +0000 UTC m=+0.024973403 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:36:56 np0005592158 systemd[1]: Started libcrun container.
Jan 22 08:36:56 np0005592158 podman[82637]: 2026-01-22 13:36:56.060999282 +0000 UTC m=+0.154049911 container init a3759077615f2675a6e14efb42389cc3b9afa9d7302414270e198d6d75eb6003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cannon, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:36:56 np0005592158 podman[82637]: 2026-01-22 13:36:56.068481227 +0000 UTC m=+0.161531836 container start a3759077615f2675a6e14efb42389cc3b9afa9d7302414270e198d6d75eb6003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cannon, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:36:56 np0005592158 podman[82637]: 2026-01-22 13:36:56.072446205 +0000 UTC m=+0.165496824 container attach a3759077615f2675a6e14efb42389cc3b9afa9d7302414270e198d6d75eb6003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cannon, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:36:56 np0005592158 angry_cannon[82653]: 167 167
Jan 22 08:36:56 np0005592158 systemd[1]: libpod-a3759077615f2675a6e14efb42389cc3b9afa9d7302414270e198d6d75eb6003.scope: Deactivated successfully.
Jan 22 08:36:56 np0005592158 podman[82637]: 2026-01-22 13:36:56.076110494 +0000 UTC m=+0.169161093 container died a3759077615f2675a6e14efb42389cc3b9afa9d7302414270e198d6d75eb6003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:36:56 np0005592158 systemd[1]: var-lib-containers-storage-overlay-4ded85dc22e22ea18daa32eeb784729b41272a201922c487dd9028c6e2c71d72-merged.mount: Deactivated successfully.
Jan 22 08:36:56 np0005592158 podman[82637]: 2026-01-22 13:36:56.11510316 +0000 UTC m=+0.208153769 container remove a3759077615f2675a6e14efb42389cc3b9afa9d7302414270e198d6d75eb6003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:36:56 np0005592158 systemd[1]: libpod-conmon-a3759077615f2675a6e14efb42389cc3b9afa9d7302414270e198d6d75eb6003.scope: Deactivated successfully.
Jan 22 08:36:56 np0005592158 systemd[1]: Reloading.
Jan 22 08:36:56 np0005592158 radosgw[82426]: LDAP not started since no server URIs were provided in the configuration.
Jan 22 08:36:56 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-rgw-rgw-compute-1-thdhdp[82422]: 2026-01-22T13:36:56.183+0000 7fdce17b9940 -1 LDAP not started since no server URIs were provided in the configuration.
Jan 22 08:36:56 np0005592158 radosgw[82426]: framework: beast
Jan 22 08:36:56 np0005592158 radosgw[82426]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Jan 22 08:36:56 np0005592158 radosgw[82426]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Jan 22 08:36:56 np0005592158 radosgw[82426]: starting handler: beast
Jan 22 08:36:56 np0005592158 radosgw[82426]: set uid:gid to 167:167 (ceph:ceph)
Jan 22 08:36:56 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:36:56 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:36:56 np0005592158 radosgw[82426]: mgrc service_daemon_register rgw.24134 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.101:8082,frontend_type#0=beast,hostname=compute-1,id=rgw.compute-1.thdhdp,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864312,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=9ef52632-dffc-43fe-ad78-aca5b0d3574d,zone_name=default,zonegroup_id=961906d1-4e51-43eb-bd43-c4a4ab081aea,zonegroup_name=default}
Jan 22 08:36:56 np0005592158 systemd[1]: Reloading.
Jan 22 08:36:56 np0005592158 radosgw[82426]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Jan 22 08:36:56 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:36:56 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:36:56 np0005592158 systemd[1]: Starting Ceph mds.cephfs.compute-1.ofmmzj for 088fe176-0106-5401-803c-2da38b73b76a...
Jan 22 08:36:56 np0005592158 radosgw[82426]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Jan 22 08:36:56 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:36:57 np0005592158 podman[83338]: 2026-01-22 13:36:56.998861024 +0000 UTC m=+0.040568211 container create 8dd280a87453c9cd6a0d5909da93b71a91fc226820f3456e2c4ccfd46343a14c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mds-cephfs-compute-1-ofmmzj, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 08:36:57 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:57 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ofmmzj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 22 08:36:57 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ofmmzj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 22 08:36:57 np0005592158 ceph-mon[81715]: Deploying daemon mds.cephfs.compute-1.ofmmzj on compute-1
Jan 22 08:36:57 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 08:36:57 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae761a8bd634a2930add77d124704061b535378ac98230c3bfea60d4f94dc62c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:57 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae761a8bd634a2930add77d124704061b535378ac98230c3bfea60d4f94dc62c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:57 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae761a8bd634a2930add77d124704061b535378ac98230c3bfea60d4f94dc62c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:57 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae761a8bd634a2930add77d124704061b535378ac98230c3bfea60d4f94dc62c/merged/var/lib/ceph/mds/ceph-cephfs.compute-1.ofmmzj supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:57 np0005592158 podman[83338]: 2026-01-22 13:36:57.067993472 +0000 UTC m=+0.109700679 container init 8dd280a87453c9cd6a0d5909da93b71a91fc226820f3456e2c4ccfd46343a14c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mds-cephfs-compute-1-ofmmzj, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 22 08:36:57 np0005592158 podman[83338]: 2026-01-22 13:36:57.073156564 +0000 UTC m=+0.114863751 container start 8dd280a87453c9cd6a0d5909da93b71a91fc226820f3456e2c4ccfd46343a14c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mds-cephfs-compute-1-ofmmzj, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:36:57 np0005592158 bash[83338]: 8dd280a87453c9cd6a0d5909da93b71a91fc226820f3456e2c4ccfd46343a14c
Jan 22 08:36:57 np0005592158 podman[83338]: 2026-01-22 13:36:56.98188134 +0000 UTC m=+0.023588547 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:36:57 np0005592158 systemd[1]: Started Ceph mds.cephfs.compute-1.ofmmzj for 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 08:36:57 np0005592158 ceph-mds[83358]: set uid:gid to 167:167 (ceph:ceph)
Jan 22 08:36:57 np0005592158 ceph-mds[83358]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Jan 22 08:36:57 np0005592158 ceph-mds[83358]: main not setting numa affinity
Jan 22 08:36:57 np0005592158 ceph-mds[83358]: pidfile_write: ignore empty --pid-file
Jan 22 08:36:57 np0005592158 ceph-088fe176-0106-5401-803c-2da38b73b76a-mds-cephfs-compute-1-ofmmzj[83354]: starting mds.cephfs.compute-1.ofmmzj at 
Jan 22 08:36:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e56 e56: 3 total, 3 up, 3 in
Jan 22 08:36:58 np0005592158 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj Updating MDS map to version 7 from mon.2
Jan 22 08:36:58 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Jan 22 08:36:58 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Jan 22 08:36:59 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 4.d deep-scrub starts
Jan 22 08:36:59 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 4.d deep-scrub ok
Jan 22 08:37:00 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).mds e8 new map
Jan 22 08:37:00 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).mds e8 print_map#012e8#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-22T13:35:18.163168+0000#012modified#0112026-01-22T13:36:52.245537+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24139}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.zycvef{0:24139} state up:active seq 2 addr [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.zjixst{-1:14337} state up:standby seq 1 addr [v2:192.168.122.100:6806/2895449706,v1:192.168.122.100:6807/2895449706] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-1.ofmmzj{-1:24140} state up:standby seq 1 addr [v2:192.168.122.101:6804/2522830803,v1:192.168.122.101:6805/2522830803] compat {c=[1],r=[1],i=[7ff]}]
Jan 22 08:37:00 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e57 e57: 3 total, 3 up, 3 in
Jan 22 08:37:00 np0005592158 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj Updating MDS map to version 8 from mon.2
Jan 22 08:37:00 np0005592158 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj Monitors have assigned me to become a standby.
Jan 22 08:37:00 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 22 08:37:00 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 08:37:00 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:00 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:37:01 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e58 e58: 3 total, 3 up, 3 in
Jan 22 08:37:01 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 22 08:37:01 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:01 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 08:37:01 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:01 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:01 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:37:02 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Jan 22 08:37:02 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Jan 22 08:37:03 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:37:03 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:37:03 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:03 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:37:03 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 22 08:37:03 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 08:37:03 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e59 e59: 3 total, 3 up, 3 in
Jan 22 08:37:03 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.f deep-scrub starts
Jan 22 08:37:03 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.f deep-scrub ok
Jan 22 08:37:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).mds e9 new map
Jan 22 08:37:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).mds e9 print_map#012e9#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0119#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-22T13:35:18.163168+0000#012modified#0112026-01-22T13:37:03.744747+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24139}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.zycvef{0:24139} state up:active seq 5 join_fscid=1 addr [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.zjixst{-1:14337} state up:standby seq 1 addr [v2:192.168.122.100:6806/2895449706,v1:192.168.122.100:6807/2895449706] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-1.ofmmzj{-1:24140} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/2522830803,v1:192.168.122.101:6805/2522830803] compat {c=[1],r=[1],i=[7ff]}]
Jan 22 08:37:03 np0005592158 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj Updating MDS map to version 9 from mon.2
Jan 22 08:37:04 np0005592158 ceph-mon[81715]: Deploying daemon haproxy.rgw.default.compute-0.erkqlp on compute-0
Jan 22 08:37:04 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:37:04 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:37:04 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 22 08:37:04 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:37:04 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:37:04 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:04 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Jan 22 08:37:04 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Jan 22 08:37:05 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e60 e60: 3 total, 3 up, 3 in
Jan 22 08:37:05 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 60 pg[10.0( v 58'96 (0'0,58'96] local-lis/les=51/52 n=8 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=60 pruub=8.265120506s) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 58'95 mlcod 58'95 active pruub 158.687652588s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:05 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 60 pg[10.0( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=60 pruub=8.265120506s) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 58'95 mlcod 0'0 unknown pruub 158.687652588s@ mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:06 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).mds e10 new map
Jan 22 08:37:06 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).mds e10 print_map#012e10#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0119#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-22T13:35:18.163168+0000#012modified#0112026-01-22T13:37:03.744747+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24139}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.zycvef{0:24139} state up:active seq 5 join_fscid=1 addr [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.zjixst{-1:14337} state up:standby seq 4 join_fscid=1 addr [v2:192.168.122.100:6806/2895449706,v1:192.168.122.100:6807/2895449706] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-1.ofmmzj{-1:24140} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/2522830803,v1:192.168.122.101:6805/2522830803] compat {c=[1],r=[1],i=[7ff]}]
Jan 22 08:37:06 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e61 e61: 3 total, 3 up, 3 in
Jan 22 08:37:06 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:37:06 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:37:06 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.11( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.7( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=1 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.1b( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.17( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.13( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.12( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.10( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.1f( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.1e( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.1d( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.1c( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.1a( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.19( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.18( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.6( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=1 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.5( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=1 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.4( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=1 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.b( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.8( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=1 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.a( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.c( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.d( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.f( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.3( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=1 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.14( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.15( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.e( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.16( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.1( v 58'96 (0'0,58'96] local-lis/les=51/52 n=1 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.9( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.2( v 58'96 lc 0'0 (0'0,58'96] local-lis/les=51/52 n=1 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.11( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.17( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.12( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.10( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.1f( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.1e( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.1d( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.1a( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.19( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.1c( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.18( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.4( v 58'96 (0'0,58'96] local-lis/les=60/61 n=1 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.5( v 58'96 (0'0,58'96] local-lis/les=60/61 n=1 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.6( v 58'96 (0'0,58'96] local-lis/les=60/61 n=1 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.b( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.8( v 58'96 (0'0,58'96] local-lis/les=60/61 n=1 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.a( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.c( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.1b( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.d( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.f( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.0( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 58'95 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.3( v 58'96 (0'0,58'96] local-lis/les=60/61 n=1 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.14( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.15( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.e( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.1( v 58'96 (0'0,58'96] local-lis/les=60/61 n=1 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.9( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.2( v 58'96 (0'0,58'96] local-lis/les=60/61 n=1 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.16( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.7( v 58'96 (0'0,58'96] local-lis/les=60/61 n=1 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:07 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 61 pg[10.13( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:08 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Jan 22 08:37:08 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Jan 22 08:37:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:37:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.002000054s ======
Jan 22 08:37:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:10.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Jan 22 08:37:11 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:37:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:37:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:37:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:12.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:37:12 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e62 e62: 3 total, 3 up, 3 in
Jan 22 08:37:12 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.11( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.867439270s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active pruub 168.332916260s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.11( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.867372513s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.332916260s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.1b( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.872615814s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active pruub 168.338287354s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.1b( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.872380257s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.338287354s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.10( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871800423s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active pruub 168.337860107s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.10( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871774673s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.337860107s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.1e( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871617317s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active pruub 168.337936401s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.12( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871500015s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active pruub 168.337844849s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.1e( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871585846s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.337936401s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.12( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871476173s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.337844849s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.19( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871587753s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active pruub 168.338043213s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.19( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871541023s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.338043213s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.18( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871558189s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active pruub 168.338073730s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.18( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871539116s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.338073730s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.5( v 58'96 (0'0,58'96] local-lis/les=60/61 n=1 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871469498s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active pruub 168.338134766s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.5( v 58'96 (0'0,58'96] local-lis/les=60/61 n=1 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871451378s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.338134766s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.4( v 58'96 (0'0,58'96] local-lis/les=60/61 n=1 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871421814s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active pruub 168.338119507s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.4( v 58'96 (0'0,58'96] local-lis/les=60/61 n=1 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871317863s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.338119507s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.8( v 58'96 (0'0,58'96] local-lis/les=60/61 n=1 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871380806s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active pruub 168.338195801s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.8( v 58'96 (0'0,58'96] local-lis/les=60/61 n=1 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871359825s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.338195801s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.f( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871323586s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active pruub 168.338302612s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.13( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871201515s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active pruub 168.338027954s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.f( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871301651s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.338302612s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.3( v 61'99 (0'0,61'99] local-lis/les=60/61 n=1 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871238708s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 61'98 mlcod 61'98 active pruub 168.338348389s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.13( v 58'96 (0'0,58'96] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.870937347s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.338027954s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.15( v 61'99 (0'0,61'99] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871099472s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 61'98 mlcod 61'98 active pruub 168.338363647s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.15( v 61'99 (0'0,61'99] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871060371s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 61'98 mlcod 0'0 unknown NOTIFY pruub 168.338363647s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.1( v 58'96 (0'0,58'96] local-lis/les=60/61 n=1 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871049881s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active pruub 168.338455200s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.2( v 58'96 (0'0,58'96] local-lis/les=60/61 n=1 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871059418s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active pruub 168.338485718s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.1( v 58'96 (0'0,58'96] local-lis/les=60/61 n=1 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871023178s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.338455200s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.2( v 58'96 (0'0,58'96] local-lis/les=60/61 n=1 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871041298s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.338485718s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.14( v 61'99 (0'0,61'99] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.870669365s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 61'98 mlcod 61'98 active pruub 168.338348389s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.3( v 61'99 (0'0,61'99] local-lis/les=60/61 n=1 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.871191978s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 61'98 mlcod 0'0 unknown NOTIFY pruub 168.338348389s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[10.14( v 61'99 (0'0,61'99] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=9.870625496s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=58'96 lcod 61'98 mlcod 0'0 unknown NOTIFY pruub 168.338348389s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[11.1e( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[11.1d( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[8.1b( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[11.1( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[8.8( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[8.14( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[8.10( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[11.f( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[11.4( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[11.5( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[11.7( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[8.19( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[11.1c( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[8.12( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[11.12( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[11.14( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[11.1a( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[8.17( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[11.1b( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[8.18( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 62 pg[8.4( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:37:13 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:37:13 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 22 08:37:13 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:37:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:37:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:14.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:37:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:37:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:16.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:37:16 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:37:18 np0005592158 ceph-mds[83358]: mds.beacon.cephfs.compute-1.ofmmzj missed beacon ack from the monitors
Jan 22 08:37:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:37:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:18.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:19 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e63 e63: 3 total, 3 up, 3 in
Jan 22 08:37:19 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:37:19 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:37:19 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 22 08:37:19 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:37:19 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:19 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 22 08:37:19 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[11.f( v 58'2 (0'0,58'2] local-lis/les=62/63 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:19 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[8.17( v 48'8 lc 0'0 (0'0,48'8] local-lis/les=62/63 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=48'8 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:19 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[11.14( v 58'2 (0'0,58'2] local-lis/les=62/63 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:19 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[11.4( v 58'2 (0'0,58'2] local-lis/les=62/63 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:19 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[11.5( v 58'2 (0'0,58'2] local-lis/les=62/63 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:19 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[11.1( v 58'2 (0'0,58'2] local-lis/les=62/63 n=1 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:19 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[8.4( v 48'8 (0'0,48'8] local-lis/les=62/63 n=1 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:19 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[11.7( v 58'2 (0'0,58'2] local-lis/les=62/63 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:19 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[8.1b( v 48'8 (0'0,48'8] local-lis/les=62/63 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:19 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[8.18( v 48'8 (0'0,48'8] local-lis/les=62/63 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:19 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[11.1b( v 58'2 (0'0,58'2] local-lis/les=62/63 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:19 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[11.1d( v 58'2 (0'0,58'2] local-lis/les=62/63 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:19 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[8.8( v 48'8 (0'0,48'8] local-lis/les=62/63 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:19 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[8.12( v 48'8 (0'0,48'8] local-lis/les=62/63 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:19 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[11.1c( v 58'2 (0'0,58'2] local-lis/les=62/63 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:19 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[8.10( v 48'8 (0'0,48'8] local-lis/les=62/63 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:19 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[11.1e( v 58'2 (0'0,58'2] local-lis/les=62/63 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:19 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[11.12( v 58'2 (0'0,58'2] local-lis/les=62/63 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:19 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[8.14( v 48'8 (0'0,48'8] local-lis/les=62/63 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:19 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[11.1a( v 58'2 (0'0,58'2] local-lis/les=62/63 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [1] r=0 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:19 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 63 pg[8.19( v 48'8 lc 0'0 (0'0,48'8] local-lis/les=62/63 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=48'8 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:37:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:20.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:22 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Jan 22 08:37:22 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Jan 22 08:37:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:37:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:22.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:22 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:37:22 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e64 e64: 3 total, 3 up, 3 in
Jan 22 08:37:22 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:22 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 22 08:37:22 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:23 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 4.e scrub starts
Jan 22 08:37:23 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 4.e scrub ok
Jan 22 08:37:24 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 4.1b deep-scrub starts
Jan 22 08:37:24 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 4.1b deep-scrub ok
Jan 22 08:37:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:37:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:24.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:24 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:24 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:24 np0005592158 ceph-mon[81715]: Deploying daemon haproxy.rgw.default.compute-2.zogxki on compute-2
Jan 22 08:37:24 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:24 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:25 np0005592158 systemd-logind[787]: New session 33 of user zuul.
Jan 22 08:37:25 np0005592158 systemd[1]: Started Session 33 of User zuul.
Jan 22 08:37:25 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:25 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:25 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:37:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:26.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:26 np0005592158 python3.9[83531]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:37:27 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:27 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Jan 22 08:37:27 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Jan 22 08:37:28 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:37:28 np0005592158 python3.9[83745]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:37:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:37:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:28.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:29 np0005592158 ceph-mon[81715]: Health check failed: 2 slow ops, oldest one blocked for 36 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:37:29 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:29 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 4.a deep-scrub starts
Jan 22 08:37:29 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 4.a deep-scrub ok
Jan 22 08:37:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:37:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:37:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:29.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:37:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:37:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:37:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:30.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:37:31 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.15 deep-scrub starts
Jan 22 08:37:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:37:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:31.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:31 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.15 deep-scrub ok
Jan 22 08:37:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:37:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:32.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:32 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Jan 22 08:37:33 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Jan 22 08:37:33 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:33 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:37:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:33.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:37:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e65 e65: 3 total, 3 up, 3 in
Jan 22 08:37:33 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:33 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:33 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:33 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 22 08:37:33 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:33 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:33 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 41 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:37:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:37:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:34.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:35 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e66 e66: 3 total, 3 up, 3 in
Jan 22 08:37:35 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:35 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 22 08:37:35 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 22 08:37:35 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:35 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:35 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:35 np0005592158 ceph-mon[81715]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 22 08:37:35 np0005592158 ceph-mon[81715]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 22 08:37:35 np0005592158 ceph-mon[81715]: Deploying daemon keepalived.rgw.default.compute-0.hawera on compute-0
Jan 22 08:37:35 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:37:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:37:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:35.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:37:35 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e67 e67: 3 total, 3 up, 3 in
Jan 22 08:37:36 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Jan 22 08:37:36 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Jan 22 08:37:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:37:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:36.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:36 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 22 08:37:36 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:36 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 22 08:37:36 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 22 08:37:36 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e68 e68: 3 total, 3 up, 3 in
Jan 22 08:37:37 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:37 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 22 08:37:37 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 47 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:37:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:37:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:37:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:37.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:37:38 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Jan 22 08:37:38 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Jan 22 08:37:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:37:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:37:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:38.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:37:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e69 e69: 3 total, 3 up, 3 in
Jan 22 08:37:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:37:39 np0005592158 systemd[1]: session-33.scope: Deactivated successfully.
Jan 22 08:37:39 np0005592158 systemd[1]: session-33.scope: Consumed 8.941s CPU time.
Jan 22 08:37:39 np0005592158 systemd-logind[787]: Session 33 logged out. Waiting for processes to exit.
Jan 22 08:37:39 np0005592158 systemd-logind[787]: Removed session 33.
Jan 22 08:37:39 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 6.a scrub starts
Jan 22 08:37:39 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 6.a scrub ok
Jan 22 08:37:39 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:39 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 22 08:37:39 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e70 e70: 3 total, 3 up, 3 in
Jan 22 08:37:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:37:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:39.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:37:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:40.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:40 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e71 e71: 3 total, 3 up, 3 in
Jan 22 08:37:40 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:40 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:40 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:40 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:40 np0005592158 ceph-mon[81715]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 22 08:37:40 np0005592158 ceph-mon[81715]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 22 08:37:40 np0005592158 ceph-mon[81715]: Deploying daemon keepalived.rgw.default.compute-2.xbsrtt on compute-2
Jan 22 08:37:41 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:41 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:41 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:41 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e72 e72: 3 total, 3 up, 3 in
Jan 22 08:37:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:37:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:41.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:37:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:42.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:42 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e73 e73: 3 total, 3 up, 3 in
Jan 22 08:37:42 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:37:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:37:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:37:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:43.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:37:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:37:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:44.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:37:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:37:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:45.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:37:45 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 52 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:37:45 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:46 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e74 e74: 3 total, 3 up, 3 in
Jan 22 08:37:46 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 74 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=74) [1] r=0 lpr=74 pi=[59,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:46 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 74 pg[9.e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=74) [1] r=0 lpr=74 pi=[59,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:46 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 74 pg[9.6( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=74) [1] r=0 lpr=74 pi=[59,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:46 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 74 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=74) [1] r=0 lpr=74 pi=[59,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:37:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:46.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:47 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:47 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:47 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 22 08:37:47 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:47 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 22 08:37:47 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:47 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:47 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:47 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:47 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1", "id": [0, 1]}]: dispatch
Jan 22 08:37:47 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.12", "id": [0, 1]}]: dispatch
Jan 22 08:37:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e75 e75: 3 total, 3 up, 3 in
Jan 22 08:37:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e75 crush map has features 3314933000854323200, adjusting msgr requires
Jan 22 08:37:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e75 crush map has features 432629239337189376, adjusting msgr requires
Jan 22 08:37:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e75 crush map has features 432629239337189376, adjusting msgr requires
Jan 22 08:37:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e75 crush map has features 432629239337189376, adjusting msgr requires
Jan 22 08:37:47 np0005592158 ceph-osd[79044]: osd.1 75 crush map has features 432629239337189376, adjusting msgr requires for clients
Jan 22 08:37:47 np0005592158 ceph-osd[79044]: osd.1 75 crush map has features 432629239337189376 was 288514051259245057, adjusting msgr requires for mons
Jan 22 08:37:47 np0005592158 ceph-osd[79044]: osd.1 75 crush map has features 3314933000854323200, adjusting msgr requires for osds
Jan 22 08:37:47 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 75 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[59,75)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:47 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 75 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[59,75)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:47 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 75 pg[9.6( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[59,75)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:47 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 75 pg[9.6( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[59,75)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:47 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 75 pg[9.e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[59,75)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:47 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 75 pg[9.e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[59,75)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:47 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 75 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[59,75)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:47 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 75 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[59,75)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:47 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 75 pg[9.1( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [1] r=0 lpr=75 pi=[59,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:47 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 75 pg[9.12( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [1] r=0 lpr=75 pi=[59,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:47 np0005592158 podman[84020]: 2026-01-22 13:37:47.458021012 +0000 UTC m=+0.061207719 container exec 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Jan 22 08:37:47 np0005592158 podman[84020]: 2026-01-22 13:37:47.552384675 +0000 UTC m=+0.155571362 container exec_died 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:37:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:37:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:47.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e76 e76: 3 total, 3 up, 3 in
Jan 22 08:37:47 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 76 pg[9.1( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [1]/[0] r=-1 lpr=76 pi=[59,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:47 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 76 pg[9.12( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [1]/[0] r=-1 lpr=76 pi=[59,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:47 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 76 pg[9.12( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [1]/[0] r=-1 lpr=76 pi=[59,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:47 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 76 pg[9.1( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [1]/[0] r=-1 lpr=76 pi=[59,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:37:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:48.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:48 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:48 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1", "id": [0, 1]}]': finished
Jan 22 08:37:48 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.12", "id": [0, 1]}]': finished
Jan 22 08:37:48 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 22 08:37:48 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 57 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:37:48 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 22 08:37:48 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:48 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:37:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:37:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:37:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:49.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:37:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:37:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:37:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:50.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:37:50 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e77 e77: 3 total, 3 up, 3 in
Jan 22 08:37:50 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 77 pg[9.1e( v 62'697 (0'0,62'697] local-lis/les=0/0 n=5 ec=59/49 lis/c=75/59 les/c/f=76/60/0 sis=77) [1] r=0 lpr=77 pi=[59,77)/1 luod=0'0 crt=62'697 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:50 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 77 pg[9.1e( v 62'697 (0'0,62'697] local-lis/les=0/0 n=5 ec=59/49 lis/c=75/59 les/c/f=76/60/0 sis=77) [1] r=0 lpr=77 pi=[59,77)/1 crt=62'697 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:50 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 77 pg[9.6( v 61'698 (0'0,61'698] local-lis/les=0/0 n=6 ec=59/49 lis/c=75/59 les/c/f=76/60/0 sis=77) [1] r=0 lpr=77 pi=[59,77)/1 luod=0'0 crt=61'698 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:50 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 77 pg[9.6( v 61'698 (0'0,61'698] local-lis/les=0/0 n=6 ec=59/49 lis/c=75/59 les/c/f=76/60/0 sis=77) [1] r=0 lpr=77 pi=[59,77)/1 crt=61'698 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:50 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 77 pg[9.e( v 62'695 (0'0,62'695] local-lis/les=0/0 n=6 ec=59/49 lis/c=75/59 les/c/f=76/60/0 sis=77) [1] r=0 lpr=77 pi=[59,77)/1 luod=0'0 crt=62'695 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:50 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 77 pg[9.e( v 62'695 (0'0,62'695] local-lis/les=0/0 n=6 ec=59/49 lis/c=75/59 les/c/f=76/60/0 sis=77) [1] r=0 lpr=77 pi=[59,77)/1 crt=62'695 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:50 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 77 pg[9.16( v 58'684 (0'0,58'684] local-lis/les=0/0 n=4 ec=59/49 lis/c=75/59 les/c/f=76/60/0 sis=77) [1] r=0 lpr=77 pi=[59,77)/1 luod=0'0 crt=58'684 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:50 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 77 pg[9.16( v 58'684 (0'0,58'684] local-lis/les=0/0 n=4 ec=59/49 lis/c=75/59 les/c/f=76/60/0 sis=77) [1] r=0 lpr=77 pi=[59,77)/1 crt=58'684 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:50 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:50 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:51 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Jan 22 08:37:51 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Jan 22 08:37:51 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e78 e78: 3 total, 3 up, 3 in
Jan 22 08:37:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 78 pg[9.12( v 61'698 (0'0,61'698] local-lis/les=0/0 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 luod=0'0 crt=61'698 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 78 pg[9.12( v 61'698 (0'0,61'698] local-lis/les=0/0 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=61'698 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 78 pg[9.1( v 62'703 (0'0,62'703] local-lis/les=0/0 n=7 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 luod=0'0 crt=62'703 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 78 pg[9.1( v 62'703 (0'0,62'703] local-lis/les=0/0 n=7 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=62'703 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 78 pg[9.6( v 61'698 (0'0,61'698] local-lis/les=77/78 n=6 ec=59/49 lis/c=75/59 les/c/f=76/60/0 sis=77) [1] r=0 lpr=77 pi=[59,77)/1 crt=61'698 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 78 pg[9.1e( v 62'697 (0'0,62'697] local-lis/les=77/78 n=5 ec=59/49 lis/c=75/59 les/c/f=76/60/0 sis=77) [1] r=0 lpr=77 pi=[59,77)/1 crt=62'697 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 78 pg[9.e( v 62'695 (0'0,62'695] local-lis/les=77/78 n=6 ec=59/49 lis/c=75/59 les/c/f=76/60/0 sis=77) [1] r=0 lpr=77 pi=[59,77)/1 crt=62'695 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 78 pg[9.16( v 58'684 (0'0,58'684] local-lis/les=77/78 n=4 ec=59/49 lis/c=75/59 les/c/f=76/60/0 sis=77) [1] r=0 lpr=77 pi=[59,77)/1 crt=58'684 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:51 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 22 08:37:51 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:51 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:51 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:51 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:51 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:51 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:51 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:37:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:51.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:37:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:37:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:52.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:37:53 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Jan 22 08:37:53 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Jan 22 08:37:53 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e78 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:37:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:37:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:37:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:53.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:37:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:37:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:37:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:54.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:37:55 np0005592158 systemd-logind[787]: New session 34 of user zuul.
Jan 22 08:37:55 np0005592158 systemd[1]: Started Session 34 of User zuul.
Jan 22 08:37:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:37:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:37:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:55.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:37:55 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e79 e79: 3 total, 3 up, 3 in
Jan 22 08:37:55 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 79 pg[9.12( v 61'698 (0'0,61'698] local-lis/les=78/79 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=61'698 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:55 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 79 pg[9.1( v 62'703 (0'0,62'703] local-lis/les=78/79 n=7 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=62'703 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:55 np0005592158 python3.9[84427]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 22 08:37:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:37:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:56.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:56 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 22 08:37:56 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 22 08:37:56 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 62 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:37:57 np0005592158 python3.9[84601]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:37:57 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e80 e80: 3 total, 3 up, 3 in
Jan 22 08:37:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:37:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:57.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:58 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:58 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:58 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 22 08:37:58 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:58 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:58 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 22 08:37:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:37:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:58.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:58 np0005592158 python3.9[84757]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:37:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:37:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:37:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:37:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:59.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:37:59 np0005592158 python3.9[84910]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:38:00 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Jan 22 08:38:00 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Jan 22 08:38:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:38:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:00.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:38:00 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e81 e81: 3 total, 3 up, 3 in
Jan 22 08:38:01 np0005592158 python3.9[85064]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:38:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:01.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:01 np0005592158 python3.9[85216]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:38:02 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Jan 22 08:38:02 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Jan 22 08:38:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:02.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:03 np0005592158 python3.9[85366]: ansible-ansible.builtin.service_facts Invoked
Jan 22 08:38:03 np0005592158 network[85383]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 08:38:03 np0005592158 network[85384]: 'network-scripts' will be removed from distribution in near future.
Jan 22 08:38:03 np0005592158 network[85385]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 08:38:03 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:03 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 22 08:38:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:38:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:38:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:03.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:38:04 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Jan 22 08:38:04 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Jan 22 08:38:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:04.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:04 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e82 e82: 3 total, 3 up, 3 in
Jan 22 08:38:05 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.10 deep-scrub starts
Jan 22 08:38:05 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 5.10 deep-scrub ok
Jan 22 08:38:05 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:05 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:05 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 67 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:38:05 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:05 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:05 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:05 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:38:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:05.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:38:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:06.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:06 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e83 e83: 3 total, 3 up, 3 in
Jan 22 08:38:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:07.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:08 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:08 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 74 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:38:08 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:38:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:08.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:38:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e84 e84: 3 total, 3 up, 3 in
Jan 22 08:38:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:38:08 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.6 deep-scrub starts
Jan 22 08:38:09 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.6 deep-scrub ok
Jan 22 08:38:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:38:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:09.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:38:09 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:09 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:09 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:09 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:09 np0005592158 python3.9[85720]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:38:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:38:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:10.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:38:10 np0005592158 python3.9[86192]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:38:11 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e85 e85: 3 total, 3 up, 3 in
Jan 22 08:38:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:11.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:11 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.7 deep-scrub starts
Jan 22 08:38:11 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.7 deep-scrub ok
Jan 22 08:38:12 np0005592158 python3.9[86795]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:38:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:38:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:12.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:38:13 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:13 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:13 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 08:38:13 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:38:13 np0005592158 ceph-mon[81715]: Updating compute-0:/etc/ceph/ceph.conf
Jan 22 08:38:13 np0005592158 ceph-mon[81715]: Updating compute-1:/etc/ceph/ceph.conf
Jan 22 08:38:13 np0005592158 ceph-mon[81715]: Updating compute-2:/etc/ceph/ceph.conf
Jan 22 08:38:13 np0005592158 python3.9[86953]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:38:13 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:38:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:13.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:14 np0005592158 python3.9[87037]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:38:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:14.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:15.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:15 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Jan 22 08:38:15 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Jan 22 08:38:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:16.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:16 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e86 e86: 3 total, 3 up, 3 in
Jan 22 08:38:16 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:16 np0005592158 ceph-mon[81715]: Updating compute-2:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf
Jan 22 08:38:16 np0005592158 ceph-mon[81715]: Updating compute-1:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf
Jan 22 08:38:16 np0005592158 ceph-mon[81715]: Updating compute-0:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf
Jan 22 08:38:16 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:16 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:16 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 79 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:38:16 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:16 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:17.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:17 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.a scrub starts
Jan 22 08:38:17 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.a scrub ok
Jan 22 08:38:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:18.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e87 e87: 3 total, 3 up, 3 in
Jan 22 08:38:18 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:18 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:18 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:18 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:18 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:18 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:38:18 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.b scrub starts
Jan 22 08:38:18 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.b scrub ok
Jan 22 08:38:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:19.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:38:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:20.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:38:20 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:20 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:20 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 83 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:38:20 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:20 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:20 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:20 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:20 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:38:20 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e88 e88: 3 total, 3 up, 3 in
Jan 22 08:38:21 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.c deep-scrub starts
Jan 22 08:38:21 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.c deep-scrub ok
Jan 22 08:38:21 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:21 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:38:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:21.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:38:21 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e89 e89: 3 total, 3 up, 3 in
Jan 22 08:38:21 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 89 pg[9.a( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=89) [1] r=0 lpr=89 pi=[59,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:38:21 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 89 pg[9.1a( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=89) [1] r=0 lpr=89 pi=[59,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:38:22 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.d scrub starts
Jan 22 08:38:22 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.d scrub ok
Jan 22 08:38:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:22.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:22 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 22 08:38:22 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:22 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 22 08:38:22 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e90 e90: 3 total, 3 up, 3 in
Jan 22 08:38:22 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 90 pg[9.1a( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[59,90)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:22 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 90 pg[9.1a( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[59,90)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:38:22 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 90 pg[9.a( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[59,90)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:22 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 90 pg[9.a( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[59,90)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:38:22 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.e scrub starts
Jan 22 08:38:23 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.e scrub ok
Jan 22 08:38:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:38:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:38:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:23.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:38:24 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 93 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:38:24 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:24 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 22 08:38:24 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e91 e91: 3 total, 3 up, 3 in
Jan 22 08:38:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:38:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:24.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:38:24 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Jan 22 08:38:25 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Jan 22 08:38:25 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:25 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 22 08:38:25 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e92 e92: 3 total, 3 up, 3 in
Jan 22 08:38:25 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 92 pg[9.a( v 62'714 (0'0,62'714] local-lis/les=0/0 n=9 ec=59/49 lis/c=90/59 les/c/f=91/60/0 sis=92) [1] r=0 lpr=92 pi=[59,92)/1 luod=0'0 crt=62'714 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:25 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 92 pg[9.a( v 62'714 (0'0,62'714] local-lis/les=0/0 n=9 ec=59/49 lis/c=90/59 les/c/f=91/60/0 sis=92) [1] r=0 lpr=92 pi=[59,92)/1 crt=62'714 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:38:25 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 92 pg[9.1a( v 61'690 (0'0,61'690] local-lis/les=0/0 n=4 ec=59/49 lis/c=90/59 les/c/f=91/60/0 sis=92) [1] r=0 lpr=92 pi=[59,92)/1 luod=0'0 crt=61'690 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:25 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 92 pg[9.1a( v 61'690 (0'0,61'690] local-lis/les=0/0 n=4 ec=59/49 lis/c=90/59 les/c/f=91/60/0 sis=92) [1] r=0 lpr=92 pi=[59,92)/1 crt=61'690 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:38:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:25.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:25 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Jan 22 08:38:26 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Jan 22 08:38:26 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e93 e93: 3 total, 3 up, 3 in
Jan 22 08:38:26 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 93 pg[9.1a( v 61'690 (0'0,61'690] local-lis/les=92/93 n=4 ec=59/49 lis/c=90/59 les/c/f=91/60/0 sis=92) [1] r=0 lpr=92 pi=[59,92)/1 crt=61'690 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:38:26 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:26 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 22 08:38:26 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:26 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:26 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:26 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 93 pg[9.a( v 62'714 (0'0,62'714] local-lis/les=92/93 n=9 ec=59/49 lis/c=90/59 les/c/f=91/60/0 sis=92) [1] r=0 lpr=92 pi=[59,92)/1 crt=62'714 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:38:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:38:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:26.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:38:27 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 22 08:38:27 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 22 08:38:27 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:27.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:28.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:28 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e94 e94: 3 total, 3 up, 3 in
Jan 22 08:38:28 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 94 pg[9.d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=94) [1] r=0 lpr=94 pi=[72,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:38:28 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 94 pg[9.1d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=94) [1] r=0 lpr=94 pi=[72,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:38:28 np0005592158 ceph-mon[81715]: Reconfiguring mon.compute-0 (monmap changed)...
Jan 22 08:38:28 np0005592158 ceph-mon[81715]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 22 08:38:28 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 22 08:38:28 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:28 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:28 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.nyayzk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 22 08:38:28 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:28 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:28 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:28 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 22 08:38:28 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:38:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:29.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:29 np0005592158 systemd[72521]: Created slice User Background Tasks Slice.
Jan 22 08:38:29 np0005592158 systemd[72521]: Starting Cleanup of User's Temporary Files and Directories...
Jan 22 08:38:30 np0005592158 systemd[72521]: Finished Cleanup of User's Temporary Files and Directories.
Jan 22 08:38:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:30.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:30 np0005592158 ceph-mon[81715]: Reconfiguring mgr.compute-0.nyayzk (monmap changed)...
Jan 22 08:38:30 np0005592158 ceph-mon[81715]: Reconfiguring daemon mgr.compute-0.nyayzk on compute-0
Jan 22 08:38:30 np0005592158 ceph-mon[81715]: Reconfiguring crash.compute-0 (monmap changed)...
Jan 22 08:38:30 np0005592158 ceph-mon[81715]: Reconfiguring daemon crash.compute-0 on compute-0
Jan 22 08:38:30 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 98 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:38:30 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 22 08:38:30 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:30 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 22 08:38:31 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e95 e95: 3 total, 3 up, 3 in
Jan 22 08:38:31 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 95 pg[9.1d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[72,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:31 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 95 pg[9.1d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[72,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:38:31 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 95 pg[9.d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[72,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:31 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 95 pg[9.d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[72,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:38:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:38:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:31.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:38:31 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 22 08:38:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:31 np0005592158 ceph-mon[81715]: Reconfiguring osd.0 (monmap changed)...
Jan 22 08:38:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 22 08:38:31 np0005592158 ceph-mon[81715]: Reconfiguring daemon osd.0 on compute-0
Jan 22 08:38:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 22 08:38:32 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.1a deep-scrub starts
Jan 22 08:38:32 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.1a deep-scrub ok
Jan 22 08:38:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:38:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:32.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:38:32 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e96 e96: 3 total, 3 up, 3 in
Jan 22 08:38:32 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 96 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=96) [1] r=0 lpr=96 pi=[70,96)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:38:32 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 96 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=96) [1] r=0 lpr=96 pi=[70,96)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:38:32 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:32 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:32 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 22 08:38:33 np0005592158 podman[87321]: 2026-01-22 13:38:33.678991876 +0000 UTC m=+0.043809756 container create 27de453c76f35b3b22e49edf2f522f7ed54b853b3bcde504c86943f73df5fc26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 22 08:38:33 np0005592158 systemd[1]: Started libpod-conmon-27de453c76f35b3b22e49edf2f522f7ed54b853b3bcde504c86943f73df5fc26.scope.
Jan 22 08:38:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:38:33 np0005592158 systemd[1]: Started libcrun container.
Jan 22 08:38:33 np0005592158 podman[87321]: 2026-01-22 13:38:33.660153384 +0000 UTC m=+0.024971284 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:38:33 np0005592158 podman[87321]: 2026-01-22 13:38:33.757867333 +0000 UTC m=+0.122685233 container init 27de453c76f35b3b22e49edf2f522f7ed54b853b3bcde504c86943f73df5fc26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_liskov, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 22 08:38:33 np0005592158 podman[87321]: 2026-01-22 13:38:33.765212107 +0000 UTC m=+0.130029987 container start 27de453c76f35b3b22e49edf2f522f7ed54b853b3bcde504c86943f73df5fc26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_liskov, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 22 08:38:33 np0005592158 podman[87321]: 2026-01-22 13:38:33.768317953 +0000 UTC m=+0.133135893 container attach 27de453c76f35b3b22e49edf2f522f7ed54b853b3bcde504c86943f73df5fc26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 22 08:38:33 np0005592158 modest_liskov[87337]: 167 167
Jan 22 08:38:33 np0005592158 systemd[1]: libpod-27de453c76f35b3b22e49edf2f522f7ed54b853b3bcde504c86943f73df5fc26.scope: Deactivated successfully.
Jan 22 08:38:33 np0005592158 podman[87321]: 2026-01-22 13:38:33.772467429 +0000 UTC m=+0.137285309 container died 27de453c76f35b3b22e49edf2f522f7ed54b853b3bcde504c86943f73df5fc26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_liskov, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:38:33 np0005592158 systemd[1]: var-lib-containers-storage-overlay-d0e00fd8d55eb3c21c044fdddbae21f6ea7c462e97840cba23b1aafb8201b7ad-merged.mount: Deactivated successfully.
Jan 22 08:38:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:33.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:33 np0005592158 podman[87321]: 2026-01-22 13:38:33.81180902 +0000 UTC m=+0.176626900 container remove 27de453c76f35b3b22e49edf2f522f7ed54b853b3bcde504c86943f73df5fc26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_liskov, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 08:38:33 np0005592158 systemd[1]: libpod-conmon-27de453c76f35b3b22e49edf2f522f7ed54b853b3bcde504c86943f73df5fc26.scope: Deactivated successfully.
Jan 22 08:38:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:34.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:35 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Jan 22 08:38:35 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Jan 22 08:38:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:35.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:36 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Jan 22 08:38:36 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Jan 22 08:38:36 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e97 e97: 3 total, 3 up, 3 in
Jan 22 08:38:36 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 97 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=97) [1]/[2] r=-1 lpr=97 pi=[70,97)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:36 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 97 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=97) [1]/[2] r=-1 lpr=97 pi=[70,97)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:38:36 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 97 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=97) [1]/[2] r=-1 lpr=97 pi=[70,97)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:36 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 97 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=97) [1]/[2] r=-1 lpr=97 pi=[70,97)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:38:36 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:36 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:36 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:36 np0005592158 ceph-mon[81715]: Reconfiguring crash.compute-1 (monmap changed)...
Jan 22 08:38:36 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 22 08:38:36 np0005592158 ceph-mon[81715]: Reconfiguring daemon crash.compute-1 on compute-1
Jan 22 08:38:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:36.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:36 np0005592158 podman[87478]: 2026-01-22 13:38:36.762488168 +0000 UTC m=+0.042380416 container create ec4400d9fea5150246336f7334998f64bcce11b0c961963dbd5a613ff33f7a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wright, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 22 08:38:36 np0005592158 systemd[1]: Started libpod-conmon-ec4400d9fea5150246336f7334998f64bcce11b0c961963dbd5a613ff33f7a3a.scope.
Jan 22 08:38:36 np0005592158 systemd[1]: Started libcrun container.
Jan 22 08:38:36 np0005592158 podman[87478]: 2026-01-22 13:38:36.743336697 +0000 UTC m=+0.023228965 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:38:36 np0005592158 podman[87478]: 2026-01-22 13:38:36.84115618 +0000 UTC m=+0.121048448 container init ec4400d9fea5150246336f7334998f64bcce11b0c961963dbd5a613ff33f7a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Jan 22 08:38:36 np0005592158 podman[87478]: 2026-01-22 13:38:36.847432014 +0000 UTC m=+0.127324262 container start ec4400d9fea5150246336f7334998f64bcce11b0c961963dbd5a613ff33f7a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:38:36 np0005592158 podman[87478]: 2026-01-22 13:38:36.853532973 +0000 UTC m=+0.133425241 container attach ec4400d9fea5150246336f7334998f64bcce11b0c961963dbd5a613ff33f7a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:38:36 np0005592158 condescending_wright[87494]: 167 167
Jan 22 08:38:36 np0005592158 systemd[1]: libpod-ec4400d9fea5150246336f7334998f64bcce11b0c961963dbd5a613ff33f7a3a.scope: Deactivated successfully.
Jan 22 08:38:36 np0005592158 podman[87478]: 2026-01-22 13:38:36.856221137 +0000 UTC m=+0.136113385 container died ec4400d9fea5150246336f7334998f64bcce11b0c961963dbd5a613ff33f7a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wright, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:38:36 np0005592158 systemd[1]: var-lib-containers-storage-overlay-d7a9fbfaca9ec1bcbb7ca7623cc62de79d560ca7da0785484a7d63a764ac740f-merged.mount: Deactivated successfully.
Jan 22 08:38:36 np0005592158 podman[87478]: 2026-01-22 13:38:36.902048639 +0000 UTC m=+0.181940887 container remove ec4400d9fea5150246336f7334998f64bcce11b0c961963dbd5a613ff33f7a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wright, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:38:36 np0005592158 systemd[1]: libpod-conmon-ec4400d9fea5150246336f7334998f64bcce11b0c961963dbd5a613ff33f7a3a.scope: Deactivated successfully.
Jan 22 08:38:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e98 e98: 3 total, 3 up, 3 in
Jan 22 08:38:37 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:37 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:37 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:37 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:37 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:37 np0005592158 ceph-mon[81715]: Reconfiguring osd.1 (monmap changed)...
Jan 22 08:38:37 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 22 08:38:37 np0005592158 ceph-mon[81715]: Reconfiguring daemon osd.1 on compute-1
Jan 22 08:38:37 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:37 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:37 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 22 08:38:37 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 98 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=0/0 n=5 ec=59/49 lis/c=95/72 les/c/f=97/73/0 sis=98) [1] r=0 lpr=98 pi=[72,98)/1 luod=0'0 crt=62'695 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:37 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 98 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=0/0 n=5 ec=59/49 lis/c=95/72 les/c/f=97/73/0 sis=98) [1] r=0 lpr=98 pi=[72,98)/1 crt=62'695 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:38:37 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 98 pg[9.d( v 62'705 (0'0,62'705] local-lis/les=0/0 n=7 ec=59/49 lis/c=95/72 les/c/f=97/73/0 sis=98) [1] r=0 lpr=98 pi=[72,98)/1 luod=0'0 crt=62'705 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:37 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 98 pg[9.d( v 62'705 (0'0,62'705] local-lis/les=0/0 n=7 ec=59/49 lis/c=95/72 les/c/f=97/73/0 sis=98) [1] r=0 lpr=98 pi=[72,98)/1 crt=62'705 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:38:37 np0005592158 podman[87636]: 2026-01-22 13:38:37.564692557 +0000 UTC m=+0.037823419 container create 9b45c5bdcf5222c30648a3265214f863ad0a02727174bfea3bb5a9800049020b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_driscoll, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:38:37 np0005592158 systemd[1]: Started libpod-conmon-9b45c5bdcf5222c30648a3265214f863ad0a02727174bfea3bb5a9800049020b.scope.
Jan 22 08:38:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e99 e99: 3 total, 3 up, 3 in
Jan 22 08:38:37 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 99 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=0/0 n=5 ec=59/49 lis/c=97/70 les/c/f=98/71/0 sis=99) [1] r=0 lpr=99 pi=[70,99)/1 luod=0'0 crt=62'695 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:37 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 99 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=0/0 n=5 ec=59/49 lis/c=97/70 les/c/f=98/71/0 sis=99) [1] r=0 lpr=99 pi=[70,99)/1 crt=62'695 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:38:37 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 99 pg[9.f( v 62'704 (0'0,62'704] local-lis/les=0/0 n=7 ec=59/49 lis/c=97/70 les/c/f=98/71/0 sis=99) [1] r=0 lpr=99 pi=[70,99)/1 luod=0'0 crt=62'704 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:37 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 99 pg[9.f( v 62'704 (0'0,62'704] local-lis/les=0/0 n=7 ec=59/49 lis/c=97/70 les/c/f=98/71/0 sis=99) [1] r=0 lpr=99 pi=[70,99)/1 crt=62'704 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:38:37 np0005592158 systemd[1]: Started libcrun container.
Jan 22 08:38:37 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 99 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=98/99 n=5 ec=59/49 lis/c=95/72 les/c/f=97/73/0 sis=98) [1] r=0 lpr=98 pi=[72,98)/1 crt=62'695 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:38:37 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 99 pg[9.d( v 62'705 (0'0,62'705] local-lis/les=98/99 n=7 ec=59/49 lis/c=95/72 les/c/f=97/73/0 sis=98) [1] r=0 lpr=98 pi=[72,98)/1 crt=62'705 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:38:37 np0005592158 podman[87636]: 2026-01-22 13:38:37.547895941 +0000 UTC m=+0.021026833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:38:37 np0005592158 podman[87636]: 2026-01-22 13:38:37.645085557 +0000 UTC m=+0.118216449 container init 9b45c5bdcf5222c30648a3265214f863ad0a02727174bfea3bb5a9800049020b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_driscoll, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 22 08:38:37 np0005592158 podman[87636]: 2026-01-22 13:38:37.651153295 +0000 UTC m=+0.124284167 container start 9b45c5bdcf5222c30648a3265214f863ad0a02727174bfea3bb5a9800049020b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_driscoll, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 22 08:38:37 np0005592158 trusting_driscoll[87652]: 167 167
Jan 22 08:38:37 np0005592158 systemd[1]: libpod-9b45c5bdcf5222c30648a3265214f863ad0a02727174bfea3bb5a9800049020b.scope: Deactivated successfully.
Jan 22 08:38:37 np0005592158 podman[87636]: 2026-01-22 13:38:37.655680121 +0000 UTC m=+0.128811033 container attach 9b45c5bdcf5222c30648a3265214f863ad0a02727174bfea3bb5a9800049020b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_driscoll, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:38:37 np0005592158 podman[87636]: 2026-01-22 13:38:37.656627107 +0000 UTC m=+0.129757969 container died 9b45c5bdcf5222c30648a3265214f863ad0a02727174bfea3bb5a9800049020b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_driscoll, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:38:37 np0005592158 systemd[1]: var-lib-containers-storage-overlay-4813cf314fe975fdd84ef3949dd6a8999775a1f143ce594ea00f0239ce7008b3-merged.mount: Deactivated successfully.
Jan 22 08:38:37 np0005592158 podman[87636]: 2026-01-22 13:38:37.691403652 +0000 UTC m=+0.164534514 container remove 9b45c5bdcf5222c30648a3265214f863ad0a02727174bfea3bb5a9800049020b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_driscoll, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 22 08:38:37 np0005592158 systemd[1]: libpod-conmon-9b45c5bdcf5222c30648a3265214f863ad0a02727174bfea3bb5a9800049020b.scope: Deactivated successfully.
Jan 22 08:38:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:37.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:38:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:38.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:38:38 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:38 np0005592158 ceph-mon[81715]: Reconfiguring mon.compute-1 (monmap changed)...
Jan 22 08:38:38 np0005592158 ceph-mon[81715]: Reconfiguring daemon mon.compute-1 on compute-1
Jan 22 08:38:38 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:38:38 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:38 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:38 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 22 08:38:38 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e100 e100: 3 total, 3 up, 3 in
Jan 22 08:38:38 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 100 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=99/100 n=5 ec=59/49 lis/c=97/70 les/c/f=98/71/0 sis=99) [1] r=0 lpr=99 pi=[70,99)/1 crt=62'695 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:38:38 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 100 pg[9.f( v 62'704 (0'0,62'704] local-lis/les=99/100 n=7 ec=59/49 lis/c=97/70 les/c/f=98/71/0 sis=99) [1] r=0 lpr=99 pi=[70,99)/1 crt=62'704 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:38:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:38:39 np0005592158 podman[87857]: 2026-01-22 13:38:39.317572464 +0000 UTC m=+0.070260779 container exec 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:38:39 np0005592158 ceph-mon[81715]: Reconfiguring mon.compute-2 (monmap changed)...
Jan 22 08:38:39 np0005592158 ceph-mon[81715]: Reconfiguring daemon mon.compute-2 on compute-2
Jan 22 08:38:39 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:39 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:39 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:39 np0005592158 podman[87857]: 2026-01-22 13:38:39.424829129 +0000 UTC m=+0.177517404 container exec_died 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 08:38:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:39.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:38:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:40.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:38:40 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:40 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:40 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:41 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Jan 22 08:38:41 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Jan 22 08:38:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:41.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:42 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:42 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:42 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:42 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:42 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:42 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:38:42 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:42 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:38:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:38:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:42.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:38:43 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Jan 22 08:38:43 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Jan 22 08:38:43 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:43 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:38:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:38:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:43.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:44.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:44 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:44 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 22 08:38:44 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e101 e101: 3 total, 3 up, 3 in
Jan 22 08:38:44 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 101 pg[9.10( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=101) [1] r=0 lpr=101 pi=[59,101)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:38:45 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:45 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 22 08:38:45 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:45 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e102 e102: 3 total, 3 up, 3 in
Jan 22 08:38:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 102 pg[9.10( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=102) [1]/[0] r=-1 lpr=102 pi=[59,102)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 102 pg[9.10( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=102) [1]/[0] r=-1 lpr=102 pi=[59,102)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:38:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:45.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:46 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Jan 22 08:38:46 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Jan 22 08:38:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:46.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:46 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Jan 22 08:38:46 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:46 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e103 e103: 3 total, 3 up, 3 in
Jan 22 08:38:46 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 103 pg[9.11( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=103) [1] r=0 lpr=103 pi=[59,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:38:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e104 e104: 3 total, 3 up, 3 in
Jan 22 08:38:47 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 104 pg[9.10( v 58'684 (0'0,58'684] local-lis/les=0/0 n=2 ec=59/49 lis/c=102/59 les/c/f=103/60/0 sis=104) [1] r=0 lpr=104 pi=[59,104)/1 luod=0'0 crt=58'684 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:47 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 104 pg[9.10( v 58'684 (0'0,58'684] local-lis/les=0/0 n=2 ec=59/49 lis/c=102/59 les/c/f=103/60/0 sis=104) [1] r=0 lpr=104 pi=[59,104)/1 crt=58'684 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:38:47 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 104 pg[9.11( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=104) [1]/[0] r=-1 lpr=104 pi=[59,104)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:47 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 104 pg[9.11( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=104) [1]/[0] r=-1 lpr=104 pi=[59,104)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:38:47 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 22 08:38:47 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:47.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:48.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e105 e105: 3 total, 3 up, 3 in
Jan 22 08:38:48 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 105 pg[9.10( v 58'684 (0'0,58'684] local-lis/les=104/105 n=2 ec=59/49 lis/c=102/59 les/c/f=103/60/0 sis=104) [1] r=0 lpr=104 pi=[59,104)/1 crt=58'684 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:38:48 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:48 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:48 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:38:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:38:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:49.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:38:50 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Jan 22 08:38:50 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Jan 22 08:38:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:38:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:50.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:38:51 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:51 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Jan 22 08:38:51 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Jan 22 08:38:51 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e106 e106: 3 total, 3 up, 3 in
Jan 22 08:38:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 106 pg[9.11( v 62'701 (0'0,62'701] local-lis/les=0/0 n=5 ec=59/49 lis/c=104/59 les/c/f=105/60/0 sis=106) [1] r=0 lpr=106 pi=[59,106)/1 luod=0'0 crt=62'701 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 106 pg[9.11( v 62'701 (0'0,62'701] local-lis/les=0/0 n=5 ec=59/49 lis/c=104/59 les/c/f=105/60/0 sis=106) [1] r=0 lpr=106 pi=[59,106)/1 crt=62'701 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:38:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:38:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:51.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:38:52 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:52 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:52 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Jan 22 08:38:52 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Jan 22 08:38:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:38:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:52.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:38:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e107 e107: 3 total, 3 up, 3 in
Jan 22 08:38:52 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 107 pg[9.11( v 62'701 (0'0,62'701] local-lis/les=106/107 n=5 ec=59/49 lis/c=104/59 les/c/f=105/60/0 sis=106) [1] r=0 lpr=106 pi=[59,106)/1 crt=62'701 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:38:53 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:53 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 123 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:38:53 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:38:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:38:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:53.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:38:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:54.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:54 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:54 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Jan 22 08:38:54 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:54 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e108 e108: 3 total, 3 up, 3 in
Jan 22 08:38:55 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 8.14 deep-scrub starts
Jan 22 08:38:55 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 8.14 deep-scrub ok
Jan 22 08:38:55 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 22 08:38:55 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:55.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:55 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e109 e109: 3 total, 3 up, 3 in
Jan 22 08:38:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:56.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:56 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Jan 22 08:38:56 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 22 08:38:56 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:57 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Jan 22 08:38:57 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Jan 22 08:38:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:38:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:57.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:38:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e110 e110: 3 total, 3 up, 3 in
Jan 22 08:38:58 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:58 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Jan 22 08:38:58 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Jan 22 08:38:58 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Jan 22 08:38:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:38:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:58.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:38:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:38:59 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:59 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:38:59 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 22 08:38:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:38:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:59.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:00 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e111 e111: 3 total, 3 up, 3 in
Jan 22 08:39:00 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 111 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=111) [1] r=0 lpr=111 pi=[72,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:39:00 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:00 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Jan 22 08:39:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:00.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:01 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e112 e112: 3 total, 3 up, 3 in
Jan 22 08:39:01 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 112 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[72,112)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:39:01 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 112 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[72,112)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:39:01 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:01 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 22 08:39:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:01.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:02 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Jan 22 08:39:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:39:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:02.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:39:02 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Jan 22 08:39:02 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e113 e113: 3 total, 3 up, 3 in
Jan 22 08:39:02 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:02 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Jan 22 08:39:02 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 113 pg[9.16( v 58'684 (0'0,58'684] local-lis/les=77/78 n=4 ec=59/49 lis/c=77/77 les/c/f=78/78/0 sis=113 pruub=8.887313843s) [2] r=-1 lpr=113 pi=[77,113)/1 crt=58'684 mlcod 0'0 active pruub 276.769866943s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:39:02 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 113 pg[9.16( v 58'684 (0'0,58'684] local-lis/les=77/78 n=4 ec=59/49 lis/c=77/77 les/c/f=78/78/0 sis=113 pruub=8.887105942s) [2] r=-1 lpr=113 pi=[77,113)/1 crt=58'684 mlcod 0'0 unknown NOTIFY pruub 276.769866943s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:39:02 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e114 e114: 3 total, 3 up, 3 in
Jan 22 08:39:02 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 114 pg[9.16( v 58'684 (0'0,58'684] local-lis/les=77/78 n=4 ec=59/49 lis/c=77/77 les/c/f=78/78/0 sis=114) [2]/[1] r=0 lpr=114 pi=[77,114)/1 crt=58'684 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:39:02 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 114 pg[9.16( v 58'684 (0'0,58'684] local-lis/les=77/78 n=4 ec=59/49 lis/c=77/77 les/c/f=78/78/0 sis=114) [2]/[1] r=0 lpr=114 pi=[77,114)/1 crt=58'684 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 08:39:03 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:03 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 22 08:39:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:39:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:03.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:39:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:04.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:39:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:39:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:05.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:39:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:39:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:06.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:39:06 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Jan 22 08:39:06 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Jan 22 08:39:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:07.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:08.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:39:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:39:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:09.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:39:10 np0005592158 ceph-mds[83358]: mds.beacon.cephfs.compute-1.ofmmzj missed beacon ack from the monitors
Jan 22 08:39:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:10.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:11 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e115 e115: 3 total, 3 up, 3 in
Jan 22 08:39:11 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 115 pg[9.15( v 62'690 (0'0,62'690] local-lis/les=0/0 n=5 ec=59/49 lis/c=112/72 les/c/f=113/73/0 sis=115) [1] r=0 lpr=115 pi=[72,115)/1 luod=0'0 crt=62'690 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:39:11 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 115 pg[9.15( v 62'690 (0'0,62'690] local-lis/les=0/0 n=5 ec=59/49 lis/c=112/72 les/c/f=113/73/0 sis=115) [1] r=0 lpr=115 pi=[72,115)/1 crt=62'690 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:39:11 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 115 pg[9.16( v 58'684 (0'0,58'684] local-lis/les=114/115 n=4 ec=59/49 lis/c=77/77 les/c/f=78/78/0 sis=114) [2]/[1] async=[2] r=0 lpr=114 pi=[77,114)/1 crt=58'684 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:39:11 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:11.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:12.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:12 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e116 e116: 3 total, 3 up, 3 in
Jan 22 08:39:12 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 116 pg[9.16( v 58'684 (0'0,58'684] local-lis/les=114/115 n=4 ec=59/49 lis/c=114/77 les/c/f=115/78/0 sis=116 pruub=14.448846817s) [2] async=[2] r=-1 lpr=116 pi=[77,116)/1 crt=58'684 mlcod 58'684 active pruub 292.724243164s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:39:12 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 116 pg[9.16( v 58'684 (0'0,58'684] local-lis/les=114/115 n=4 ec=59/49 lis/c=114/77 les/c/f=115/78/0 sis=116 pruub=14.448719978s) [2] r=-1 lpr=116 pi=[77,116)/1 crt=58'684 mlcod 0'0 unknown NOTIFY pruub 292.724243164s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:39:12 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 116 pg[9.15( v 62'690 (0'0,62'690] local-lis/les=115/116 n=5 ec=59/49 lis/c=112/72 les/c/f=113/73/0 sis=115) [1] r=0 lpr=115 pi=[72,115)/1 crt=62'690 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:39:13 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:13 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:13 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:13 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:13 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:39:13 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:13 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:13 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:13 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:13 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Jan 22 08:39:13 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Jan 22 08:39:13 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:39:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:13.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:14 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:14 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:14.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:14 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.f scrub starts
Jan 22 08:39:14 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.f scrub ok
Jan 22 08:39:14 np0005592158 python3.9[88190]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:39:14 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e117 e117: 3 total, 3 up, 3 in
Jan 22 08:39:15 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:15.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:16.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:16 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Jan 22 08:39:16 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Jan 22 08:39:16 np0005592158 python3.9[88477]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 22 08:39:16 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:16 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:17 np0005592158 python3.9[88629]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 22 08:39:17 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e118 e118: 3 total, 3 up, 3 in
Jan 22 08:39:17 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:17 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Jan 22 08:39:17 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 144 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:39:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:39:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:17.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:39:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:39:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:18.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:39:18 np0005592158 python3.9[88781]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:39:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:39:19 np0005592158 python3.9[88933]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 22 08:39:19 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:19 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 22 08:39:19 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Jan 22 08:39:19 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Jan 22 08:39:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:19.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:20.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:20 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e119 e119: 3 total, 3 up, 3 in
Jan 22 08:39:20 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:20 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Jan 22 08:39:20 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:21 np0005592158 python3.9[89085]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:39:21 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Jan 22 08:39:21 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Jan 22 08:39:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:21.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:21 np0005592158 python3.9[89237]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:39:22 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e120 e120: 3 total, 3 up, 3 in
Jan 22 08:39:22 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:22 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 22 08:39:22 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Jan 22 08:39:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:22.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:22 np0005592158 python3.9[89315]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:39:22 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e121 e121: 3 total, 3 up, 3 in
Jan 22 08:39:23 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:23 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 22 08:39:23 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 154 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:39:23 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Jan 22 08:39:23 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Jan 22 08:39:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e122 e122: 3 total, 3 up, 3 in
Jan 22 08:39:23 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 122 pg[9.1a( v 61'690 (0'0,61'690] local-lis/les=92/93 n=4 ec=59/49 lis/c=92/92 les/c/f=93/93/0 sis=122 pruub=14.605878830s) [0] r=-1 lpr=122 pi=[92,122)/1 crt=61'690 mlcod 0'0 active pruub 303.567810059s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:39:23 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 122 pg[9.1a( v 61'690 (0'0,61'690] local-lis/les=92/93 n=4 ec=59/49 lis/c=92/92 les/c/f=93/93/0 sis=122 pruub=14.605822563s) [0] r=-1 lpr=122 pi=[92,122)/1 crt=61'690 mlcod 0'0 unknown NOTIFY pruub 303.567810059s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:39:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:39:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:23.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:24 np0005592158 python3.9[89467]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:39:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:24.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:24 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:24 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Jan 22 08:39:24 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 22 08:39:25 np0005592158 python3.9[89621]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 22 08:39:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:25.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:26 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e123 e123: 3 total, 3 up, 3 in
Jan 22 08:39:26 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 123 pg[9.1a( v 61'690 (0'0,61'690] local-lis/les=92/93 n=4 ec=59/49 lis/c=92/92 les/c/f=93/93/0 sis=123) [0]/[1] r=0 lpr=123 pi=[92,123)/1 crt=61'690 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:39:26 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 123 pg[9.1a( v 61'690 (0'0,61'690] local-lis/les=92/93 n=4 ec=59/49 lis/c=92/92 les/c/f=93/93/0 sis=123) [0]/[1] r=0 lpr=123 pi=[92,123)/1 crt=61'690 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 08:39:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:39:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:26.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:39:26 np0005592158 python3.9[89774]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 22 08:39:27 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:27 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e124 e124: 3 total, 3 up, 3 in
Jan 22 08:39:27 np0005592158 python3.9[89927]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 22 08:39:27 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Jan 22 08:39:27 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Jan 22 08:39:27 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e125 e125: 3 total, 3 up, 3 in
Jan 22 08:39:27 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 125 pg[9.1a( v 61'690 (0'0,61'690] local-lis/les=123/125 n=4 ec=59/49 lis/c=92/92 les/c/f=93/93/0 sis=123) [0]/[1] async=[0] r=0 lpr=123 pi=[92,123)/1 crt=61'690 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:39:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:27.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:28 np0005592158 python3.9[90079]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 22 08:39:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:39:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:28.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:39:28 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:28 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Jan 22 08:39:28 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:28 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:28 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 22 08:39:28 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:39:29 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e126 e126: 3 total, 3 up, 3 in
Jan 22 08:39:29 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 126 pg[9.1a( v 61'690 (0'0,61'690] local-lis/les=123/125 n=4 ec=59/49 lis/c=123/92 les/c/f=125/93/0 sis=126 pruub=14.590867043s) [0] async=[0] r=-1 lpr=126 pi=[92,126)/1 crt=61'690 mlcod 61'690 active pruub 308.968566895s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:39:29 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 126 pg[9.1a( v 61'690 (0'0,61'690] local-lis/les=123/125 n=4 ec=59/49 lis/c=123/92 les/c/f=125/93/0 sis=126 pruub=14.590797424s) [0] r=-1 lpr=126 pi=[92,126)/1 crt=61'690 mlcod 0'0 unknown NOTIFY pruub 308.968566895s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:39:29 np0005592158 python3.9[90231]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:39:29 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:29 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 158 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:39:29 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:29 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Jan 22 08:39:29 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Jan 22 08:39:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:39:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:29.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:39:30 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e127 e127: 3 total, 3 up, 3 in
Jan 22 08:39:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:30.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:30 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:31 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e128 e128: 3 total, 3 up, 3 in
Jan 22 08:39:31 np0005592158 python3.9[90384]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:39:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:39:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:31.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:39:32 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:32.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:32 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 8.18 deep-scrub starts
Jan 22 08:39:32 np0005592158 python3.9[90536]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:39:32 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 8.18 deep-scrub ok
Jan 22 08:39:32 np0005592158 python3.9[90614]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:39:33 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:33 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Jan 22 08:39:33 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Jan 22 08:39:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:39:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:39:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:33.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:39:33 np0005592158 python3.9[90766]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:39:34 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:34 np0005592158 python3.9[90844]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:39:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:39:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:34.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:39:35 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:35 np0005592158 python3.9[90996]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:39:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:35.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:36 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:36 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Jan 22 08:39:36 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:39:36.383701) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 08:39:36 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Jan 22 08:39:36 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089176383839, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7331, "num_deletes": 255, "total_data_size": 14116716, "memory_usage": 14338576, "flush_reason": "Manual Compaction"}
Jan 22 08:39:36 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Jan 22 08:39:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:39:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:36.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:39:36 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089176449343, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 8798928, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 257, "largest_seqno": 7336, "table_properties": {"data_size": 8768075, "index_size": 20178, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9605, "raw_key_size": 92032, "raw_average_key_size": 24, "raw_value_size": 8693720, "raw_average_value_size": 2268, "num_data_blocks": 884, "num_entries": 3832, "num_filter_entries": 3832, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 1769088931, "file_creation_time": 1769089176, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:39:36 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 65713 microseconds, and 18588 cpu microseconds.
Jan 22 08:39:36 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:39:36.449428) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 8798928 bytes OK
Jan 22 08:39:36 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:39:36.449457) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Jan 22 08:39:36 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:39:36.462153) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Jan 22 08:39:36 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:39:36.462210) EVENT_LOG_v1 {"time_micros": 1769089176462198, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Jan 22 08:39:36 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:39:36.462239) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Jan 22 08:39:36 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 14075848, prev total WAL file size 14075848, number of live WAL files 2.
Jan 22 08:39:36 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:39:36 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:39:36.465959) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Jan 22 08:39:36 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Jan 22 08:39:36 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(8592KB) 8(1648B)]
Jan 22 08:39:36 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089176466106, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 8800576, "oldest_snapshot_seqno": -1}
Jan 22 08:39:36 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 3581 keys, 8795436 bytes, temperature: kUnknown
Jan 22 08:39:36 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089176528899, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 8795436, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8765249, "index_size": 20157, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8965, "raw_key_size": 87854, "raw_average_key_size": 24, "raw_value_size": 8694000, "raw_average_value_size": 2427, "num_data_blocks": 884, "num_entries": 3581, "num_filter_entries": 3581, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769089176, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:39:36 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:39:36 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:39:36.529210) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 8795436 bytes
Jan 22 08:39:36 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:39:36.530365) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 140.0 rd, 139.9 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(8.4, 0.0 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3837, records dropped: 256 output_compression: NoCompression
Jan 22 08:39:36 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:39:36.530391) EVENT_LOG_v1 {"time_micros": 1769089176530379, "job": 4, "event": "compaction_finished", "compaction_time_micros": 62877, "compaction_time_cpu_micros": 19059, "output_level": 6, "num_output_files": 1, "total_output_size": 8795436, "num_input_records": 3837, "num_output_records": 3581, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 08:39:36 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:39:36 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089176531913, "job": 4, "event": "table_file_deletion", "file_number": 14}
Jan 22 08:39:36 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:39:36 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089176531965, "job": 4, "event": "table_file_deletion", "file_number": 8}
Jan 22 08:39:36 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:39:36.465789) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:39:37 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:37 np0005592158 python3.9[91148]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:39:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:37.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:38.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e129 e129: 3 total, 3 up, 3 in
Jan 22 08:39:38 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:38 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Jan 22 08:39:38 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:39:38 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:38 np0005592158 python3.9[91300]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 22 08:39:38 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Jan 22 08:39:38 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Jan 22 08:39:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:39:39 np0005592158 python3.9[91450]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:39:39 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 22 08:39:39 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:39.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:39:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:40.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:39:40 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Jan 22 08:39:40 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:40 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e130 e130: 3 total, 3 up, 3 in
Jan 22 08:39:40 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 130 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=98/99 n=5 ec=59/49 lis/c=98/98 les/c/f=99/99/0 sis=130 pruub=8.937850952s) [2] r=-1 lpr=130 pi=[98,130)/1 crt=62'695 mlcod 0'0 active pruub 314.922821045s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:39:40 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 130 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=98/99 n=5 ec=59/49 lis/c=98/98 les/c/f=99/99/0 sis=130 pruub=8.937788010s) [2] r=-1 lpr=130 pi=[98,130)/1 crt=62'695 mlcod 0'0 unknown NOTIFY pruub 314.922821045s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:39:41 np0005592158 python3.9[91602]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:39:41 np0005592158 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 22 08:39:41 np0005592158 systemd[1]: tuned.service: Deactivated successfully.
Jan 22 08:39:41 np0005592158 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 22 08:39:41 np0005592158 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 22 08:39:41 np0005592158 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 22 08:39:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:41.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:42 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 22 08:39:42 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:42 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e131 e131: 3 total, 3 up, 3 in
Jan 22 08:39:42 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 131 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=98/99 n=5 ec=59/49 lis/c=98/98 les/c/f=99/99/0 sis=131) [2]/[1] r=0 lpr=131 pi=[98,131)/1 crt=62'695 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:39:42 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 131 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=98/99 n=5 ec=59/49 lis/c=98/98 les/c/f=99/99/0 sis=131) [2]/[1] r=0 lpr=131 pi=[98,131)/1 crt=62'695 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 08:39:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:42.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:42 np0005592158 python3.9[91765]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 22 08:39:43 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:43 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 173 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:39:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e132 e132: 3 total, 3 up, 3 in
Jan 22 08:39:43 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 132 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=131/132 n=5 ec=59/49 lis/c=98/98 les/c/f=99/99/0 sis=131) [2]/[1] async=[2] r=0 lpr=131 pi=[98,131)/1 crt=62'695 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:39:43 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 9.e scrub starts
Jan 22 08:39:43 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 9.e scrub ok
Jan 22 08:39:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:39:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:43.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:44 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:44 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Jan 22 08:39:44 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e133 e133: 3 total, 3 up, 3 in
Jan 22 08:39:44 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 133 pg[9.1e( v 62'697 (0'0,62'697] local-lis/les=77/78 n=5 ec=59/49 lis/c=77/77 les/c/f=78/78/0 sis=133 pruub=15.142436028s) [0] r=-1 lpr=133 pi=[77,133)/1 crt=62'697 mlcod 0'0 active pruub 324.771209717s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:39:44 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 133 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=131/132 n=5 ec=59/49 lis/c=131/98 les/c/f=132/99/0 sis=133 pruub=14.980648041s) [2] async=[2] r=-1 lpr=133 pi=[98,133)/1 crt=62'695 mlcod 62'695 active pruub 324.609436035s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:39:44 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 133 pg[9.1e( v 62'697 (0'0,62'697] local-lis/les=77/78 n=5 ec=59/49 lis/c=77/77 les/c/f=78/78/0 sis=133 pruub=15.142356873s) [0] r=-1 lpr=133 pi=[77,133)/1 crt=62'697 mlcod 0'0 unknown NOTIFY pruub 324.771209717s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:39:44 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 133 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=131/132 n=5 ec=59/49 lis/c=131/98 les/c/f=132/99/0 sis=133 pruub=14.980477333s) [2] r=-1 lpr=133 pi=[98,133)/1 crt=62'695 mlcod 0'0 unknown NOTIFY pruub 324.609436035s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:39:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:44.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:45 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:45 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 22 08:39:45 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e134 e134: 3 total, 3 up, 3 in
Jan 22 08:39:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 134 pg[9.1e( v 62'697 (0'0,62'697] local-lis/les=77/78 n=5 ec=59/49 lis/c=77/77 les/c/f=78/78/0 sis=134) [0]/[1] r=0 lpr=134 pi=[77,134)/1 crt=62'697 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:39:45 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 134 pg[9.1e( v 62'697 (0'0,62'697] local-lis/les=77/78 n=5 ec=59/49 lis/c=77/77 les/c/f=78/78/0 sis=134) [0]/[1] r=0 lpr=134 pi=[77,134)/1 crt=62'697 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 08:39:45 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Jan 22 08:39:45 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Jan 22 08:39:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:45.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:39:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:46.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:39:46 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:46 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:39:46 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e135 e135: 3 total, 3 up, 3 in
Jan 22 08:39:46 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 135 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=99/100 n=5 ec=59/49 lis/c=99/99 les/c/f=100/100/0 sis=135 pruub=12.084430695s) [0] r=-1 lpr=135 pi=[99,135)/1 crt=62'695 mlcod 0'0 active pruub 323.938873291s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:39:46 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 135 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=99/100 n=5 ec=59/49 lis/c=99/99 les/c/f=100/100/0 sis=135 pruub=12.084036827s) [0] r=-1 lpr=135 pi=[99,135)/1 crt=62'695 mlcod 0'0 unknown NOTIFY pruub 323.938873291s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:39:46 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 135 pg[9.1e( v 62'697 (0'0,62'697] local-lis/les=134/135 n=5 ec=59/49 lis/c=77/77 les/c/f=78/78/0 sis=134) [0]/[1] async=[0] r=0 lpr=134 pi=[77,134)/1 crt=62'697 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:39:47 np0005592158 python3.9[91917]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:39:47 np0005592158 python3.9[92071]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:39:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:47.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:48.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:48 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Jan 22 08:39:48 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Jan 22 08:39:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:39:49 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:49 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:39:49 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:49 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e136 e136: 3 total, 3 up, 3 in
Jan 22 08:39:49 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 136 pg[9.1e( v 62'697 (0'0,62'697] local-lis/les=134/135 n=5 ec=59/49 lis/c=134/77 les/c/f=135/78/0 sis=136 pruub=13.113392830s) [0] async=[0] r=-1 lpr=136 pi=[77,136)/1 crt=62'697 mlcod 62'697 active pruub 327.872009277s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:39:49 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 136 pg[9.1e( v 62'697 (0'0,62'697] local-lis/les=134/135 n=5 ec=59/49 lis/c=134/77 les/c/f=135/78/0 sis=136 pruub=13.113287926s) [0] r=-1 lpr=136 pi=[77,136)/1 crt=62'697 mlcod 0'0 unknown NOTIFY pruub 327.872009277s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:39:49 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 136 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=99/100 n=5 ec=59/49 lis/c=99/99 les/c/f=100/100/0 sis=136) [0]/[1] r=0 lpr=136 pi=[99,136)/1 crt=62'695 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:39:49 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 136 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=99/100 n=5 ec=59/49 lis/c=99/99 les/c/f=100/100/0 sis=136) [0]/[1] r=0 lpr=136 pi=[99,136)/1 crt=62'695 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 08:39:49 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 9.12 deep-scrub starts
Jan 22 08:39:49 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 9.12 deep-scrub ok
Jan 22 08:39:49 np0005592158 podman[92268]: 2026-01-22 13:39:49.898072535 +0000 UTC m=+0.070932458 container exec 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:39:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:49.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:50 np0005592158 podman[92268]: 2026-01-22 13:39:50.018422233 +0000 UTC m=+0.191282156 container exec_died 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:39:50 np0005592158 systemd-logind[787]: Session 34 logged out. Waiting for processes to exit.
Jan 22 08:39:50 np0005592158 systemd[1]: session-34.scope: Deactivated successfully.
Jan 22 08:39:50 np0005592158 systemd[1]: session-34.scope: Consumed 1min 7.327s CPU time.
Jan 22 08:39:50 np0005592158 systemd-logind[787]: Removed session 34.
Jan 22 08:39:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:50.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:50 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:50 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:50 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:50 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e137 e137: 3 total, 3 up, 3 in
Jan 22 08:39:50 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 137 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=136/137 n=5 ec=59/49 lis/c=99/99 les/c/f=100/100/0 sis=136) [0]/[1] async=[0] r=0 lpr=136 pi=[99,136)/1 crt=62'695 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:39:50 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 9.a scrub starts
Jan 22 08:39:50 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 9.a scrub ok
Jan 22 08:39:51 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:39:51 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:39:51 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:51 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:39:51 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:39:51 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e138 e138: 3 total, 3 up, 3 in
Jan 22 08:39:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 138 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=136/137 n=5 ec=59/49 lis/c=136/99 les/c/f=137/100/0 sis=138 pruub=14.980248451s) [0] async=[0] r=-1 lpr=138 pi=[99,138)/1 crt=62'695 mlcod 62'695 active pruub 331.834564209s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:39:51 np0005592158 ceph-osd[79044]: osd.1 pg_epoch: 138 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=136/137 n=5 ec=59/49 lis/c=136/99 les/c/f=137/100/0 sis=138 pruub=14.979649544s) [0] r=-1 lpr=138 pi=[99,138)/1 crt=62'695 mlcod 0'0 unknown NOTIFY pruub 331.834564209s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:39:51 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 9.d scrub starts
Jan 22 08:39:51 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 9.d scrub ok
Jan 22 08:39:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:39:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:51.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:39:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:52.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:52 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:39:52 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:39:52 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:52 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:39:52 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:39:52 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:39:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 e139: 3 total, 3 up, 3 in
Jan 22 08:39:53 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 183 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:39:53 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:53 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:39:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:53.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:54.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:54 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:55 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:39:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:55.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:39:55 np0005592158 systemd-logind[787]: New session 35 of user zuul.
Jan 22 08:39:55 np0005592158 systemd[1]: Started Session 35 of User zuul.
Jan 22 08:39:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:39:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:56.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:39:56 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:57 np0005592158 python3.9[92677]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:39:57 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:57.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:39:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:58.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:39:58 np0005592158 python3.9[92833]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 22 08:39:58 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 9.f scrub starts
Jan 22 08:39:58 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 9.f scrub ok
Jan 22 08:39:58 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 188 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:39:58 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:39:59 np0005592158 python3.9[92986]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:39:59 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:59 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:39:59 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:39:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:39:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:39:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:59.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:40:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:00.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:00 np0005592158 python3.9[93120]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 22 08:40:00 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:00 np0005592158 ceph-mon[81715]: Health detail: HEALTH_WARN 2 slow ops, oldest one blocked for 188 sec, osd.2 has slow ops
Jan 22 08:40:00 np0005592158 ceph-mon[81715]: [WRN] SLOW_OPS: 2 slow ops, oldest one blocked for 188 sec, osd.2 has slow ops
Jan 22 08:40:01 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:40:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:01.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:40:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:02.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:02 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:02 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 193 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:40:02 np0005592158 python3.9[93276]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:40:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:40:03 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:03.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:40:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:04.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:40:04 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:05 np0005592158 python3.9[93429]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 08:40:05 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:05.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:06 np0005592158 python3.9[93582]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:40:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:06.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:06 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:07 np0005592158 python3.9[93734]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 22 08:40:07 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:07.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:08 np0005592158 python3.9[93884]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:40:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:40:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:08.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:40:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:40:08 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 198 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:40:08 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:09 np0005592158 python3.9[94042]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:40:09 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:09.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:40:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:10.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:40:10 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:11 np0005592158 python3.9[94195]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:40:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:11.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:12 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:12.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:13 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:13 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Jan 22 08:40:13 np0005592158 ceph-osd[79044]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Jan 22 08:40:13 np0005592158 python3.9[94482]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 22 08:40:13 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:40:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:13.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:14 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:14.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:14 np0005592158 python3.9[94632]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:40:15 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:15 np0005592158 python3.9[94786]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:40:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:15.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:16 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:40:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:16.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:40:17 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:17 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:17 np0005592158 python3.9[94939]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:40:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:17.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:40:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:18.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:40:18 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 203 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:40:18 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:40:19 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:19 np0005592158 python3.9[95092]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:40:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:19.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:20.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:20 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:20 np0005592158 python3.9[95246]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Jan 22 08:40:21 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:21.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:22 np0005592158 systemd[1]: session-35.scope: Deactivated successfully.
Jan 22 08:40:22 np0005592158 systemd[1]: session-35.scope: Consumed 18.631s CPU time.
Jan 22 08:40:22 np0005592158 systemd-logind[787]: Session 35 logged out. Waiting for processes to exit.
Jan 22 08:40:22 np0005592158 systemd-logind[787]: Removed session 35.
Jan 22 08:40:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:40:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:22.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:40:22 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:23 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 213 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:40:23 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:40:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:23.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:24.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:24 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:25 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:25.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:26.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:26 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:27 np0005592158 systemd-logind[787]: New session 36 of user zuul.
Jan 22 08:40:27 np0005592158 systemd[1]: Started Session 36 of User zuul.
Jan 22 08:40:27 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:27.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:28.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:28 np0005592158 python3.9[95424]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:40:28 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:40:28 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 218 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:40:28 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:29 np0005592158 python3.9[95578]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:40:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:40:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:29.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:40:30 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:30.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:30 np0005592158 python3.9[95771]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:40:31 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:31 np0005592158 systemd[1]: session-36.scope: Deactivated successfully.
Jan 22 08:40:31 np0005592158 systemd[1]: session-36.scope: Consumed 2.427s CPU time.
Jan 22 08:40:31 np0005592158 systemd-logind[787]: Session 36 logged out. Waiting for processes to exit.
Jan 22 08:40:31 np0005592158 systemd-logind[787]: Removed session 36.
Jan 22 08:40:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:32.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:32 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:32.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:33 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:40:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:40:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:34.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:40:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:34.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:34 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:35 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:35 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:40:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:36.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:40:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:40:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:36.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:40:36 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:37 np0005592158 systemd-logind[787]: New session 37 of user zuul.
Jan 22 08:40:37 np0005592158 systemd[1]: Started Session 37 of User zuul.
Jan 22 08:40:37 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:37 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 223 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:40:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:38.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:38 np0005592158 python3.9[95950]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:40:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:38.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:40:39 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:39 np0005592158 python3.9[96104]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:40:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:40.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:40 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:40.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:40 np0005592158 python3.9[96260]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:40:41 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:41 np0005592158 python3.9[96344]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:40:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:42.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:42 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:42.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:43 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:43 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 233 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:40:43 np0005592158 python3.9[96497]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:40:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:40:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:44.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:40:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:44.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:40:44 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:45 np0005592158 python3.9[96692]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:40:45 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:45 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:45 np0005592158 python3.9[96844]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:40:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:46.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:40:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:46.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:40:46 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:47 np0005592158 python3.9[97009]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:40:47 np0005592158 python3.9[97087]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:40:47 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:48.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:48 np0005592158 python3.9[97239]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:40:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:48.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:40:48 np0005592158 python3.9[97317]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:40:48 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 238 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:40:48 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:49 np0005592158 python3.9[97469]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:40:49 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:50.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:50 np0005592158 python3.9[97621]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:40:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:50.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:51 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:51 np0005592158 python3.9[97773]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:40:51 np0005592158 python3.9[97925]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:40:52 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:52.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:40:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:52.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:40:53 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:53 np0005592158 python3.9[98077]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:40:53 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:40:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:54.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:54 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:54.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:55 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:55 np0005592158 python3.9[98230]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:40:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:40:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:56.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:40:56 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:56 np0005592158 python3.9[98384]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:40:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:56.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:57 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:57 np0005592158 python3.9[98536]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:40:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:58.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:58 np0005592158 python3.9[98688]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:40:58 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:58 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 243 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:40:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:40:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:58.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:40:59 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:59 np0005592158 python3.9[98841]: ansible-service_facts Invoked
Jan 22 08:40:59 np0005592158 network[98858]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 08:40:59 np0005592158 network[98859]: 'network-scripts' will be removed from distribution in near future.
Jan 22 08:40:59 np0005592158 network[98860]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 08:41:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:00.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:00 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:00.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:00 np0005592158 podman[99068]: 2026-01-22 13:41:00.848129661 +0000 UTC m=+0.063039800 container exec 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:41:00 np0005592158 podman[99068]: 2026-01-22 13:41:00.945974549 +0000 UTC m=+0.160884698 container exec_died 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 22 08:41:01 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:01 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:41:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:02.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:02 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:02 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:41:02 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:41:02 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:41:02 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:41:02 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:41:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:02.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:03 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:03 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 253 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:41:03 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:41:03 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:03 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:41:03 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:41:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:41:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:04.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:04.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:04 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:05 np0005592158 python3.9[99730]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:41:05 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:06.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:06.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:06 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:07 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:08.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:08 np0005592158 python3.9[99883]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 22 08:41:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:08.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:41:08 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 258 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:41:08 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:09 np0005592158 python3.9[100035]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:41:09 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:09 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:41:09 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:41:09 np0005592158 python3.9[100113]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:10.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:10.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:10 np0005592158 python3.9[100315]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:41:10 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:11 np0005592158 python3.9[100393]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:11 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:12.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:12.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:12 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:13 np0005592158 python3.9[100545]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:13 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:41:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:41:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:14.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:41:14 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:14.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:15 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:15 np0005592158 python3.9[100697]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:41:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:16.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:16 np0005592158 python3.9[100782]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:41:16 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:16 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:16.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:17 np0005592158 systemd[1]: session-37.scope: Deactivated successfully.
Jan 22 08:41:17 np0005592158 systemd[1]: session-37.scope: Consumed 23.526s CPU time.
Jan 22 08:41:17 np0005592158 systemd-logind[787]: Session 37 logged out. Waiting for processes to exit.
Jan 22 08:41:17 np0005592158 systemd-logind[787]: Removed session 37.
Jan 22 08:41:17 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:18.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:41:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:18.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:41:18 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 268 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:41:18 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:41:19 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:20.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:20.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:20 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:22.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:22 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:22.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:22 np0005592158 systemd-logind[787]: New session 38 of user zuul.
Jan 22 08:41:22 np0005592158 systemd[1]: Started Session 38 of User zuul.
Jan 22 08:41:23 np0005592158 python3.9[100964]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:23 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:23 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 273 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:41:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:41:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:24.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:24 np0005592158 python3.9[101116]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:41:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:41:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:24.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:41:24 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:24 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:24 np0005592158 python3.9[101194]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:25 np0005592158 systemd[1]: session-38.scope: Deactivated successfully.
Jan 22 08:41:25 np0005592158 systemd[1]: session-38.scope: Consumed 1.539s CPU time.
Jan 22 08:41:25 np0005592158 systemd-logind[787]: Session 38 logged out. Waiting for processes to exit.
Jan 22 08:41:25 np0005592158 systemd-logind[787]: Removed session 38.
Jan 22 08:41:25 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:41:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:26.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:41:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:41:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:26.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:41:26 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:28.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:28 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:28.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:28 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:41:29 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 278 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:41:29 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:30.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:30 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:30 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:30.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:31 np0005592158 systemd-logind[787]: New session 39 of user zuul.
Jan 22 08:41:31 np0005592158 systemd[1]: Started Session 39 of User zuul.
Jan 22 08:41:31 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:32.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:32 np0005592158 python3.9[101373]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:41:32 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:32.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:33 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:41:33 np0005592158 python3.9[101529]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:34.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:34 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:34.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:34 np0005592158 python3.9[101704]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:41:35 np0005592158 python3.9[101782]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.ljcifukg recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:35 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:36.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:36.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:36 np0005592158 python3.9[101934]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:41:36 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:37 np0005592158 python3.9[102012]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.0wtvek3s recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:37 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:37 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 283 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:41:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:38.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:38 np0005592158 python3.9[102164]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:41:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:38.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:41:38 np0005592158 python3.9[102316]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:41:38 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:39 np0005592158 python3.9[102394]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:41:40 np0005592158 python3.9[102546]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:41:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:40.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:40 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:40.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:40 np0005592158 python3.9[102624]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:41:41 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:41 np0005592158 python3.9[102776]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:41:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:42.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:41:42 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:42 np0005592158 python3.9[102928]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:41:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:42.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:43 np0005592158 python3.9[103006]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:43 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:43 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 293 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:41:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:41:43 np0005592158 python3.9[103158]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:41:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:44.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:44 np0005592158 python3.9[103236]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:44 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:44 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:44.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:45 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:45 np0005592158 python3.9[103388]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:41:45 np0005592158 systemd[1]: Reloading.
Jan 22 08:41:45 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:41:45 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:41:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:46.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:46 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:46.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:46 np0005592158 python3.9[103577]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:41:47 np0005592158 python3.9[103655]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:47 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:41:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:48.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:41:48 np0005592158 python3.9[103807]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:41:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:48.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:41:48 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:48 np0005592158 python3.9[103885]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:49 np0005592158 python3.9[104037]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:41:49 np0005592158 systemd[1]: Reloading.
Jan 22 08:41:49 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:41:49 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:41:49 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 298 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:41:49 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:50 np0005592158 systemd[1]: Starting Create netns directory...
Jan 22 08:41:50 np0005592158 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 22 08:41:50 np0005592158 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 22 08:41:50 np0005592158 systemd[1]: Finished Create netns directory.
Jan 22 08:41:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:50.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:50.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:50 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:50 np0005592158 python3.9[104231]: ansible-ansible.builtin.service_facts Invoked
Jan 22 08:41:51 np0005592158 network[104248]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 08:41:51 np0005592158 network[104249]: 'network-scripts' will be removed from distribution in near future.
Jan 22 08:41:51 np0005592158 network[104250]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 08:41:51 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:52.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:52.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:52 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:53 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:41:53 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:54.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:41:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:54.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:41:54 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:56.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:56 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:56.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:57 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:58 np0005592158 python3.9[104512]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:41:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:58.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:58 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:58 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 303 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:41:58 np0005592158 python3.9[104590]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:41:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:58.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:41:59 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:59 np0005592158 python3.9[104742]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:42:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:00.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:42:00 np0005592158 python3.9[104894]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:42:00 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:00 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:42:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:00.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:42:00 np0005592158 python3.9[104972]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:01 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:02.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:02 np0005592158 python3.9[105124]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 22 08:42:02 np0005592158 systemd[1]: Starting Time & Date Service...
Jan 22 08:42:02 np0005592158 systemd[1]: Started Time & Date Service.
Jan 22 08:42:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:02.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:02 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:03 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 313 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:42:03 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:42:03 np0005592158 python3.9[105280]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:04.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:04.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:04 np0005592158 python3.9[105432]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:42:04 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:05 np0005592158 python3.9[105510]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:05 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:06.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:06 np0005592158 python3.9[105662]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:42:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:42:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:06.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:42:06 np0005592158 python3.9[105740]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.xyla197f recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:06 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:07 np0005592158 python3.9[105892]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:42:07 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:08 np0005592158 python3.9[105970]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.002000054s ======
Jan 22 08:42:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:08.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Jan 22 08:42:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:08.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:42:08 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 318 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:42:08 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:09 np0005592158 python3.9[106122]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:42:09 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:10.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:10 np0005592158 python3[106275]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 22 08:42:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:10.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:10 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:10 np0005592158 podman[106597]: 2026-01-22 13:42:10.914717582 +0000 UTC m=+0.056298634 container exec 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:42:11 np0005592158 podman[106597]: 2026-01-22 13:42:11.0100404 +0000 UTC m=+0.151621452 container exec_died 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:42:11 np0005592158 python3.9[106596]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:42:11 np0005592158 python3.9[106762]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:12.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:12.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:13 np0005592158 python3.9[107076]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:42:13 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:13 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:42:13 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:42:13 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:42:13 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:42:13 np0005592158 python3.9[107201]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089332.0151615-901-144889858026480/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:13 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:42:14 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:14 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:42:14 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:14 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:42:14 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:42:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:14.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:14 np0005592158 python3.9[107353]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:42:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:14.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:15 np0005592158 python3.9[107431]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:15 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:16 np0005592158 python3.9[107583]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:42:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:42:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:16.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:42:16 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:16 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Jan 22 08:42:16 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:16.388376) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 08:42:16 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Jan 22 08:42:16 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089336388452, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 2585, "num_deletes": 251, "total_data_size": 5255458, "memory_usage": 5338544, "flush_reason": "Manual Compaction"}
Jan 22 08:42:16 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Jan 22 08:42:16 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089336407079, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 3384523, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7341, "largest_seqno": 9921, "table_properties": {"data_size": 3374668, "index_size": 5773, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3013, "raw_key_size": 25170, "raw_average_key_size": 21, "raw_value_size": 3352581, "raw_average_value_size": 2826, "num_data_blocks": 255, "num_entries": 1186, "num_filter_entries": 1186, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769089177, "oldest_key_time": 1769089177, "file_creation_time": 1769089336, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:42:16 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 18772 microseconds, and 9009 cpu microseconds.
Jan 22 08:42:16 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:42:16 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:16.407164) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 3384523 bytes OK
Jan 22 08:42:16 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:16.407188) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Jan 22 08:42:16 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:16.408891) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Jan 22 08:42:16 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:16.408908) EVENT_LOG_v1 {"time_micros": 1769089336408903, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 08:42:16 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:16.408928) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 08:42:16 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 5243460, prev total WAL file size 5243460, number of live WAL files 2.
Jan 22 08:42:16 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:42:16 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:16.410171) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Jan 22 08:42:16 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 08:42:16 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(3305KB)], [15(8589KB)]
Jan 22 08:42:16 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089336410300, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 12179959, "oldest_snapshot_seqno": -1}
Jan 22 08:42:16 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 4244 keys, 10523929 bytes, temperature: kUnknown
Jan 22 08:42:16 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089336500320, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 10523929, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10489598, "index_size": 22637, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10629, "raw_key_size": 103740, "raw_average_key_size": 24, "raw_value_size": 10406832, "raw_average_value_size": 2452, "num_data_blocks": 980, "num_entries": 4244, "num_filter_entries": 4244, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769089336, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:42:16 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:42:16 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:16.500758) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 10523929 bytes
Jan 22 08:42:16 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:16.502349) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.0 rd, 116.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.4 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(6.7) write-amplify(3.1) OK, records in: 4767, records dropped: 523 output_compression: NoCompression
Jan 22 08:42:16 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:16.502367) EVENT_LOG_v1 {"time_micros": 1769089336502358, "job": 6, "event": "compaction_finished", "compaction_time_micros": 90223, "compaction_time_cpu_micros": 25243, "output_level": 6, "num_output_files": 1, "total_output_size": 10523929, "num_input_records": 4767, "num_output_records": 4244, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 08:42:16 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:42:16 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089336503559, "job": 6, "event": "table_file_deletion", "file_number": 17}
Jan 22 08:42:16 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:42:16 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089336505304, "job": 6, "event": "table_file_deletion", "file_number": 15}
Jan 22 08:42:16 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:16.410017) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:42:16 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:16.505540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:42:16 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:16.505547) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:42:16 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:16.505549) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:42:16 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:16.505551) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:42:16 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:16.505552) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:42:16 np0005592158 python3.9[107661]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:16.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:17 np0005592158 python3.9[107813]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:42:17 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:18 np0005592158 python3.9[107891]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:18.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:18 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:18 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 328 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:42:18 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:18.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:42:19 np0005592158 python3.9[108043]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:42:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:42:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:20.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:42:20 np0005592158 python3.9[108198]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:20 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:20.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:21 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:42:21 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:21 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:42:21 np0005592158 python3.9[108400]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:22 np0005592158 python3.9[108552]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:22.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:22 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:42:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:22.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:42:22 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Jan 22 08:42:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:22.760610) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 08:42:22 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Jan 22 08:42:22 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089342760894, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 338, "num_deletes": 250, "total_data_size": 247179, "memory_usage": 253656, "flush_reason": "Manual Compaction"}
Jan 22 08:42:22 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Jan 22 08:42:22 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089342925805, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 162536, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9926, "largest_seqno": 10259, "table_properties": {"data_size": 160358, "index_size": 342, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5649, "raw_average_key_size": 19, "raw_value_size": 156071, "raw_average_value_size": 534, "num_data_blocks": 14, "num_entries": 292, "num_filter_entries": 292, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769089337, "oldest_key_time": 1769089337, "file_creation_time": 1769089342, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:42:22 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 165413 microseconds, and 1771 cpu microseconds.
Jan 22 08:42:22 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:42:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:22.926031) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 162536 bytes OK
Jan 22 08:42:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:22.926098) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Jan 22 08:42:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:22.938491) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Jan 22 08:42:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:22.938541) EVENT_LOG_v1 {"time_micros": 1769089342938529, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 08:42:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:22.938566) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 08:42:22 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 244803, prev total WAL file size 244803, number of live WAL files 2.
Jan 22 08:42:22 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:42:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:22.939445) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Jan 22 08:42:22 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 08:42:22 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(158KB)], [18(10MB)]
Jan 22 08:42:22 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089342939486, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 10686465, "oldest_snapshot_seqno": -1}
Jan 22 08:42:22 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 4026 keys, 7892140 bytes, temperature: kUnknown
Jan 22 08:42:22 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089342981631, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 7892140, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7862824, "index_size": 18134, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 99787, "raw_average_key_size": 24, "raw_value_size": 7787312, "raw_average_value_size": 1934, "num_data_blocks": 782, "num_entries": 4026, "num_filter_entries": 4026, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769089342, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:42:22 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:42:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:22.981971) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 7892140 bytes
Jan 22 08:42:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:22.983719) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 252.8 rd, 186.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 10.0 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(114.3) write-amplify(48.6) OK, records in: 4536, records dropped: 510 output_compression: NoCompression
Jan 22 08:42:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:22.983767) EVENT_LOG_v1 {"time_micros": 1769089342983747, "job": 8, "event": "compaction_finished", "compaction_time_micros": 42272, "compaction_time_cpu_micros": 20477, "output_level": 6, "num_output_files": 1, "total_output_size": 7892140, "num_input_records": 4536, "num_output_records": 4026, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 08:42:22 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:42:22 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089342984027, "job": 8, "event": "table_file_deletion", "file_number": 20}
Jan 22 08:42:22 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:42:22 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089342985900, "job": 8, "event": "table_file_deletion", "file_number": 18}
Jan 22 08:42:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:22.939352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:42:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:22.986087) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:42:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:22.986095) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:42:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:22.986097) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:42:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:22.986099) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:42:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:42:22.986100) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:42:23 np0005592158 python3.9[108704]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 22 08:42:23 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:23 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 333 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:42:23 np0005592158 python3.9[108856]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 22 08:42:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:42:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:24.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:24 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:24 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:42:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:24.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:42:24 np0005592158 systemd[1]: session-39.scope: Deactivated successfully.
Jan 22 08:42:24 np0005592158 systemd[1]: session-39.scope: Consumed 30.304s CPU time.
Jan 22 08:42:24 np0005592158 systemd-logind[787]: Session 39 logged out. Waiting for processes to exit.
Jan 22 08:42:24 np0005592158 systemd-logind[787]: Removed session 39.
Jan 22 08:42:25 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:26.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:26 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:42:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:26.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:42:27 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:28.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:28 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 338 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:42:28 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:42:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:28.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:42:28 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:42:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:30.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:30 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:42:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:30.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:42:31 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:42:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:32.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:42:32 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:32 np0005592158 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 22 08:42:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:32.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:33 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:33 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 343 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:42:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:42:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:42:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:34.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:42:34 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:42:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:34.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:42:35 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:35 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:35 np0005592158 systemd-logind[787]: New session 40 of user zuul.
Jan 22 08:42:35 np0005592158 systemd[1]: Started Session 40 of User zuul.
Jan 22 08:42:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:36.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:36 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:42:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:36.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:42:37 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:37 np0005592158 python3.9[109039]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 22 08:42:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:42:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:38.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:42:38 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:42:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:38.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:42:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:42:38 np0005592158 python3.9[109191]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:42:39 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 348 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:42:39 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:40.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:40.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:40 np0005592158 python3.9[109345]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Jan 22 08:42:40 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:41 np0005592158 python3.9[109497]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible._xxekxtc follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:42:41 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:42.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:42 np0005592158 python3.9[109622]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible._xxekxtc mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089361.1579506-108-135509256431941/.source._xxekxtc _original_basename=.o1khmyd3 follow=False checksum=9893b3bde8503c371031e4467aece9772279f87c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:42.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:43 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:43 np0005592158 python3.9[109774]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:42:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:42:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:44.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:44 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:44.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:44 np0005592158 python3.9[109926]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2ocldELG9EA3TbFx5afl1mbwf9X+3Gzx1pKWvAq8+0s5gE2NeAD23paYiiaQ+/r8QE6CHtXOoy/H9FGAGU3oxMrZnEX7nslelo1+Q7jWdE7ILrzUhQpkJeXJNMrA3p7aBbMxEqMXO9Ydl3Cu0CA+jItIQW1oTWLvS+BsWbES09z++jcPgu6HJu1lFXD9GgU53AfhpFcnhuxK8AnNyG1iy1Zus5Xi2NlME94THioW0/1Ek8Pl/PbSdpaErM1lgrZ7Yl/MdCelTNQI4tQrJebtNynEMhrYTBwbruS6YIia/ZSxDJZWt9bg1dpkd24KSpr4hz5kDn4sCFHyPV/JMYmuvTwFByBXc92tBbYeQU5KMBP8OFjlzfm1uAfnM1BOyrPOy7E5RFig010mTP/VruBFb/T+3Z9DqjZCkGagdrKrV80AwqnAsn/mMG/tHarrHLr8BRX1UIFUz2qfFaBpSkmeQ6u3ERLQyvJIjXaXjvvmQVDRQxd8P5HWM57joMC2P+c8=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFTUVWfsHbDnQr7ZM9BkSRv9ghRtTlzwZgmDm9W4jCII#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGjBy4pT9xvRinN5D7FG54iZjTb5U7Le6fRnUKrD4anfJZQ1Vd0mJxikxxi0T2VsVngeW+U82a0S7cK3UeWIL9s=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCz1S+AyqG+uG2QcnBxDRKRCSQ1ADb7AX9YKwfPf8jy0Q8YD3aJm/CVexcMyR1BQUaGjRFoZkm/O4ekVQ36cOQ2M7HRv78pGNm0BGtfNeFeRB5w5+RSPgj1rY9joGiRIZoyVVlz9uuM9NTlYiNC/X5gLWfreUbCGl6lDKkxGdOjUnjuZ2djcx48WXZurkkcjd9j3WCQl899CDpx6elTEEZaV3/mbpfEtOtTXEFfoq1Z1XSjngnkZMARqt+JIN02f6kgEgWNSRAJxqYbFz1jtY43UJ/C2mO29LedfXOW3dpKCC6QHdPDSQJp2Jrf0izl52jvmpDvr6wWY9PW9AmMyxh1gSuP1a/uteKBBf7vlxtpYJWDSivQxPZw3RbBZuhspxefEOUXkwGNycW/+rPGFZRrAVYWLTZ6dLn0aviyE1+ZEDIMJop1CohPOhvJxJ7s1ulnjvVDc7kLhmBewXbeY3Lp6SoMUK8ziKHsTr2Y/RfK8d7LXmARc7+O9VWI4VVV8U=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIArjsNRQko0Q06DDAhSCoRYTLidRzR9vGa18TMghIrTh#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBDfBKVIdWmS1D3kNVJYnvsERskkDp7/TXgEseqOABxcNISULCvy6hWTcKYjXdFK5Yrl53dvxfzzAGTPPln3an4=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDARChhswCxxjhho4qSL0BKXUq4AvMW1MDxy3K15MpkFlnctOqsuulAZum+3JFif15RegZjzUC7sGyhSLoFUnXimQHlJIlaGg+Vr+vh23ujuk8uWbwf6q8CF03tz4edapNjNQ+SCuGRJkINMaGGTzgBwoStqctW97kU0Z+A4cqgyMG8V8ZvSG7it0puvEOIYw5rtCA7Svueoxb5UMO33HTJbIuILYxnfEyUIHSsziJHGhRFJJ7PcNH3B4Ogew4pg31GaTi9pIHKHt/YE6WKj7P7HxpTVvgBsI27Pveo4PPkH4yCwjZlntIAvJhn+6czWlsTsmf+EUSf+u1mst9EmzJ/BztwNxcUjlAkf1E3UzoEKB70ShX+201s+/Z9VrHZj4Ku7Ptht9N5F8J01j2+qYCnmeLK9AWqkanEZy5N+hICP1XbFk3IlKyUW4Km0CXwZmXlvdC5Juyt74uJfeiNcsarU75daE2Zx4+j76+JtN8BKgrIAzEcyLOLCOxspAtxGB8=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILuPMhHnuBKJH3E1cndLaLMVE35g920qreV5wjp7kiGA#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMjB1VLvlmcfY82jQpLEcCHkJB16T8jGBBdZAl8DHhdWgqjciDgZx2zOlmbn8OtO4dCPZsLT8VomlJYVqIcvuZ4=#012 create=True mode=0644 path=/tmp/ansible._xxekxtc state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:45 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:45 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:45 np0005592158 python3.9[110078]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible._xxekxtc' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:42:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:46.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:46.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:46 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:46 np0005592158 python3.9[110232]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible._xxekxtc state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:47 np0005592158 systemd[1]: session-40.scope: Deactivated successfully.
Jan 22 08:42:47 np0005592158 systemd[1]: session-40.scope: Consumed 5.079s CPU time.
Jan 22 08:42:47 np0005592158 systemd-logind[787]: Session 40 logged out. Waiting for processes to exit.
Jan 22 08:42:47 np0005592158 systemd-logind[787]: Removed session 40.
Jan 22 08:42:47 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:47 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 353 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:42:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:48.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:48.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:42:48 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:50 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:50.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:50.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:51 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:42:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:52.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:42:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:52.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:52 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:52 np0005592158 systemd-logind[787]: New session 41 of user zuul.
Jan 22 08:42:52 np0005592158 systemd[1]: Started Session 41 of User zuul.
Jan 22 08:42:53 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:42:53 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:53 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 358 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:42:53 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:54 np0005592158 python3.9[110410]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:42:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:54.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:54.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:55 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:55 np0005592158 python3.9[110566]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 22 08:42:56 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:56.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:56.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:56 np0005592158 python3.9[110720]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:42:57 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:57 np0005592158 python3.9[110873]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:42:58 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:58 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 364 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:42:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:58.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:42:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:58.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:42:58 np0005592158 python3.9[111026]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:42:59 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:59 np0005592158 python3.9[111178]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:00.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:00 np0005592158 systemd[1]: session-41.scope: Deactivated successfully.
Jan 22 08:43:00 np0005592158 systemd[1]: session-41.scope: Consumed 3.843s CPU time.
Jan 22 08:43:00 np0005592158 systemd-logind[787]: Session 41 logged out. Waiting for processes to exit.
Jan 22 08:43:00 np0005592158 systemd-logind[787]: Removed session 41.
Jan 22 08:43:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:00.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:00 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:01 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:01 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:02.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:43:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:02.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:43:02 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:02 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 373 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:43:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:43:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:04.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:04 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:43:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:04.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:43:05 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:06 np0005592158 systemd-logind[787]: New session 42 of user zuul.
Jan 22 08:43:06 np0005592158 systemd[1]: Started Session 42 of User zuul.
Jan 22 08:43:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:43:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:06.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:43:06 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:06.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:07 np0005592158 python3.9[111356]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:43:07 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:08.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:08 np0005592158 python3.9[111512]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:43:08 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:08 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:43:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:08.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:43:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:43:09 np0005592158 python3.9[111596]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 22 08:43:09 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 378 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:43:09 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:10.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:10.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:11 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:11 np0005592158 python3.9[111747]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:43:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:43:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:12.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:43:12 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:12.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:13 np0005592158 python3.9[111898]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 22 08:43:13 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:13 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:43:13 np0005592158 python3.9[112048]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:43:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:14.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:14 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:14 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:14 np0005592158 python3.9[112198]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:43:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:14.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:15 np0005592158 systemd[1]: session-42.scope: Deactivated successfully.
Jan 22 08:43:15 np0005592158 systemd[1]: session-42.scope: Consumed 5.879s CPU time.
Jan 22 08:43:15 np0005592158 systemd-logind[787]: Session 42 logged out. Waiting for processes to exit.
Jan 22 08:43:15 np0005592158 systemd-logind[787]: Removed session 42.
Jan 22 08:43:15 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:43:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:16.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:43:16 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:43:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:16.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:43:17 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:18.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:18 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 388 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:43:18 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:43:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:18.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:43:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:43:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:20.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:21 np0005592158 systemd-logind[787]: New session 43 of user zuul.
Jan 22 08:43:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:21.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:21 np0005592158 systemd[1]: Started Session 43 of User zuul.
Jan 22 08:43:21 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:22 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:22 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:22 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:43:22 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:43:22 np0005592158 python3.9[112508]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:43:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:22.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:43:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:23.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:43:23 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:23 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 393 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:43:23 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:43:23 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:43:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:43:24 np0005592158 python3.9[112664]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:43:24 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:24 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:43:24 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:43:24 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:43:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:24.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:24 np0005592158 python3.9[112816]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:43:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:25.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:25 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:25 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:25 np0005592158 python3.9[112968]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:26 np0005592158 python3.9[113091]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089405.0622706-155-92267369894686/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=45a6f40b402a0f4b7a12be1b6902e3f2431fd4a6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:26 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:26.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:27 np0005592158 python3.9[113243]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:43:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:27.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:43:27 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:27 np0005592158 python3.9[113366]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089406.7139595-155-247888994883335/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=cc1c70588824ebebf3437effcc8b7daf397d0332 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:28 np0005592158 python3.9[113518]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:28.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:28 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:28 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:43:28 np0005592158 python3.9[113641]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089407.8758016-155-204376858758214/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=e5bff03c51cae308bb9493d7cdb7c5ec290ee48d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:29.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:29 np0005592158 python3.9[113793]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:43:29 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 399 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:43:29 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:30 np0005592158 python3.9[113945]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:43:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:43:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:30.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:43:30 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:30 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:43:30 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:43:30 np0005592158 python3.9[114097]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:43:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:31.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:43:31 np0005592158 python3.9[114270]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089410.4250064-326-39280101560777/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=9a7f8c9243bfe06a5e62a169a5db356d4082d0fc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:31 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:32 np0005592158 python3.9[114422]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:32 np0005592158 python3.9[114545]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089411.5760612-326-212427590257698/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=9db852ea1063f3b3372c70e7b1ec0fee5b9f16e1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:32.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:32 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:33 np0005592158 python3.9[114697]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:43:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:33.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:43:33 np0005592158 python3.9[114820]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089412.754164-326-276235505967771/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=1dc995048b00a644a460f48b58c367088ca51907 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:43:34 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:34 np0005592158 python3.9[114972]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:43:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:43:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:34.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:43:35 np0005592158 python3.9[115124]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:43:35 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:35.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:35 np0005592158 python3.9[115276]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:36 np0005592158 python3.9[115399]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089415.2755413-498-124885537614236/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=202ca5a0fe6e8422be7d63e3db24707225b535c1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:36 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:36.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:37 np0005592158 python3.9[115551]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:37.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:37 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:37 np0005592158 python3.9[115674]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089416.4914997-498-123452227466045/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=9db852ea1063f3b3372c70e7b1ec0fee5b9f16e1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:38 np0005592158 python3.9[115826]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:38 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:38 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 404 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:43:38 np0005592158 python3.9[115949]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089417.7513394-498-94923360586094/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=4258078fcdb3d37440c80fd4a45a43efed1545fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:38.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:43:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:39.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:39 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:40 np0005592158 python3.9[116101]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:43:40 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:40 np0005592158 python3.9[116253]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:43:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:40.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:43:41 np0005592158 python3.9[116376]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089420.252835-688-128828306301086/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c4f4c98657a71a0b13d9544ea5406adecfa4896c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:41.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:41 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:41 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:41 np0005592158 python3.9[116528]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:43:42 np0005592158 python3.9[116680]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:42 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:42.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:43 np0005592158 python3.9[116803]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089422.1715617-768-185721899192685/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c4f4c98657a71a0b13d9544ea5406adecfa4896c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:43.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:43 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 413 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:43:43 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:43:44 np0005592158 python3.9[116955]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:43:44 np0005592158 python3.9[117107]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:44.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:44 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:43:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:45.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:43:45 np0005592158 python3.9[117230]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089424.235006-845-115837118164524/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c4f4c98657a71a0b13d9544ea5406adecfa4896c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:46 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:46 np0005592158 python3.9[117382]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:43:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:43:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:46.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:43:46 np0005592158 python3.9[117534]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:46 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:47.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:47 np0005592158 python3.9[117657]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089426.27831-916-210134406126293/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c4f4c98657a71a0b13d9544ea5406adecfa4896c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:48 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:48 np0005592158 python3.9[117809]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:43:48 np0005592158 python3.9[117961]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:48.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:43:49 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:49 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 419 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:43:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:49.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:49 np0005592158 python3.9[118084]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089428.3206208-987-131121250486384/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c4f4c98657a71a0b13d9544ea5406adecfa4896c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:50 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:50 np0005592158 python3.9[118236]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:43:50 np0005592158 python3.9[118388]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:50.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:51 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:51.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:51 np0005592158 python3.9[118511]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089430.2663488-1054-179678444151943/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c4f4c98657a71a0b13d9544ea5406adecfa4896c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:52 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:43:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:52.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:43:53 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:53.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:53 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:43:54 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:54 np0005592158 systemd[1]: session-43.scope: Deactivated successfully.
Jan 22 08:43:54 np0005592158 systemd[1]: session-43.scope: Consumed 22.612s CPU time.
Jan 22 08:43:54 np0005592158 systemd-logind[787]: Session 43 logged out. Waiting for processes to exit.
Jan 22 08:43:54 np0005592158 systemd-logind[787]: Removed session 43.
Jan 22 08:43:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:54.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:55 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:55.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:56 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:56.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:57.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:57 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:58 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:58 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 424 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:43:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:58.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:43:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:43:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:59.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:59 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:00 np0005592158 systemd-logind[787]: New session 44 of user zuul.
Jan 22 08:44:00 np0005592158 systemd[1]: Started Session 44 of User zuul.
Jan 22 08:44:00 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:00 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:00.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:01 np0005592158 python3.9[118691]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:01.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:01 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:01 np0005592158 python3.9[118843]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:02 np0005592158 python3.9[118966]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089441.301294-63-89906202216053/.source.conf _original_basename=ceph.conf follow=False checksum=c3a8ec6ec08fd3904e44a403280c0742b2934d96 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:02 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:02 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 434 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:44:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:44:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:02.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:44:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:44:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:03.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:44:03 np0005592158 python3.9[119118]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:03 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:44:03 np0005592158 python3.9[119241]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089442.8524778-63-252387533520543/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=8d4a0ad3eb7bcba9ed45036c12ef9de6a4ee9832 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:04 np0005592158 systemd[1]: session-44.scope: Deactivated successfully.
Jan 22 08:44:04 np0005592158 systemd[1]: session-44.scope: Consumed 2.666s CPU time.
Jan 22 08:44:04 np0005592158 systemd-logind[787]: Session 44 logged out. Waiting for processes to exit.
Jan 22 08:44:04 np0005592158 systemd-logind[787]: Removed session 44.
Jan 22 08:44:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:44:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:04.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:44:05 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:44:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:05.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:44:05 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:06.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:07 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:44:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:07.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:44:08 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:44:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:08.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:09 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:09 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 439 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:44:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:09.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:09 np0005592158 systemd-logind[787]: New session 45 of user zuul.
Jan 22 08:44:09 np0005592158 systemd[1]: Started Session 45 of User zuul.
Jan 22 08:44:10 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:10.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:10 np0005592158 python3.9[119419]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:44:11 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:11.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:12 np0005592158 python3.9[119575]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:44:12 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:44:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:12.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:44:13 np0005592158 python3.9[119727]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:44:13 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:44:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:13.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:44:13 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:44:13 np0005592158 python3.9[119877]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:44:14 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:14 np0005592158 python3.9[120029]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 22 08:44:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:14.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:15.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:15 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:16 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:44:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:16.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:44:17 np0005592158 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 22 08:44:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:44:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:17.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:44:17 np0005592158 python3.9[120185]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:44:17 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:18 np0005592158 python3.9[120269]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:44:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:44:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:18.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:19 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:19 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 444 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:44:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:19.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:20 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:20 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:20.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:20 np0005592158 python3.9[120422]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 08:44:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:21.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:21 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:22 np0005592158 python3[120577]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 22 08:44:22 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:22.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:22 np0005592158 python3.9[120729]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:44:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:23.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:44:23 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:23 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 454 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:44:23 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:23 np0005592158 python3.9[120881]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:44:24 np0005592158 python3.9[120959]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:24 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:24.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:25 np0005592158 python3.9[121111]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:25.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:25 np0005592158 python3.9[121189]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.f69enjq8 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:25 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:26 np0005592158 python3.9[121341]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:26 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:44:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:26.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:44:26 np0005592158 python3.9[121419]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:27.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:27 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:28 np0005592158 python3.9[121571]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:44:28 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:44:28 np0005592158 python3[121724]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 22 08:44:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:29.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:29 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 458 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:44:29 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:44:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:29.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:44:29 np0005592158 python3.9[121876]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:30 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:30 np0005592158 python3.9[122001]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089469.4487805-432-279008555723522/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:44:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:31.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:44:31 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:31.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:31 np0005592158 python3.9[122253]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:32 np0005592158 python3.9[122511]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089471.0949771-477-252415074622759/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:32 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:32 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:44:32 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:44:33 np0005592158 python3.9[122680]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:44:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:33.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:44:33 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:33 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 464 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:44:33 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:44:33 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:44:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:33.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:33 np0005592158 python3.9[122805]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089472.5086403-522-111495984059400/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:44:34 np0005592158 python3.9[122957]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:34 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:34 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:44:34 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:44:34 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:44:34 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 08:44:34 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.5 total, 600.0 interval#012Cumulative writes: 6027 writes, 25K keys, 6027 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 6027 writes, 961 syncs, 6.27 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6027 writes, 25K keys, 6027 commit groups, 1.0 writes per commit group, ingest: 19.25 MB, 0.03 MB/s#012Interval WAL: 6027 writes, 961 syncs, 6.27 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.5 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.5 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.5 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtab
Jan 22 08:44:34 np0005592158 python3.9[123082]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089473.9480953-567-130309993562111/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:44:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:35.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:44:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:44:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:35.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:44:35 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:35 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:35 np0005592158 python3.9[123234]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:36 np0005592158 python3.9[123359]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089475.3187177-612-238158840577734/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:36 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:37.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:37 np0005592158 python3.9[123511]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:37.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:37 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:38 np0005592158 python3.9[123663]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:44:38 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:44:38 np0005592158 python3.9[123818]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:44:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:39.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:44:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:39.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:39 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 469 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:44:39 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:39 np0005592158 python3.9[123970]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:44:40 np0005592158 python3.9[124173]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:44:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:41.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:41 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:41 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:44:41 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:44:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:41.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:41 np0005592158 python3.9[124327]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:44:42 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:42 np0005592158 python3.9[124482]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:44:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:43.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:44:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:43.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:43 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:43 np0005592158 python3.9[124632]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:44:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:44:44 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:45.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:45 np0005592158 python3.9[124785]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:8d:1d:08:09" external_ids:ovn-encap-ip=172.19.0.102 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:44:45 np0005592158 ovs-vsctl[124786]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:8d:1d:08:09 external_ids:ovn-encap-ip=172.19.0.102 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 22 08:44:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:45.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:45 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:46 np0005592158 python3.9[124938]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:44:46 np0005592158 python3.9[125093]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:44:46 np0005592158 ovs-vsctl[125094]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 22 08:44:46 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:46 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:47.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:47.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:47 np0005592158 python3.9[125244]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:44:48 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:48 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 474 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:44:48 np0005592158 python3.9[125399]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:44:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:44:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:44:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:49.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:44:49 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:49.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:49 np0005592158 python3.9[125551]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:50 np0005592158 python3.9[125629]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:44:50 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:50 np0005592158 python3.9[125781]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:51.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:51 np0005592158 python3.9[125859]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:44:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:44:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:51.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:44:51 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:51 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:51 np0005592158 python3.9[126011]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:52 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:52 np0005592158 python3.9[126163]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:53.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:53 np0005592158 python3.9[126241]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:53.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:53 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:44:54 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:54 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 484 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:44:54 np0005592158 python3.9[126393]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:54 np0005592158 python3.9[126471]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:55.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:55 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:55.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:55 np0005592158 python3.9[126623]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:44:55 np0005592158 systemd[1]: Reloading.
Jan 22 08:44:55 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:44:55 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:44:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:57.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:57 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:57.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:57 np0005592158 python3.9[126814]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:58 np0005592158 python3.9[126892]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:58 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:58 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:44:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:59.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:59 np0005592158 python3.9[127044]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:44:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:59.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:59 np0005592158 python3.9[127122]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:59 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:59 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:59 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 489 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:45:00 np0005592158 python3.9[127274]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:45:00 np0005592158 systemd[1]: Reloading.
Jan 22 08:45:00 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:45:00 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:45:00 np0005592158 systemd[1]: Starting Create netns directory...
Jan 22 08:45:01 np0005592158 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 22 08:45:01 np0005592158 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 22 08:45:01 np0005592158 systemd[1]: Finished Create netns directory.
Jan 22 08:45:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:01.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:45:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:01.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:45:01 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:02 np0005592158 python3.9[127468]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:45:02 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:03 np0005592158 python3.9[127620]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:45:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:45:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:03.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:45:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:03.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:03 np0005592158 python3.9[127743]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769089502.4659057-1365-191987629783071/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:45:03 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:03 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:45:04 np0005592158 python3.9[127895]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:45:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:45:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:05.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:45:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:05.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:05 np0005592158 python3.9[128047]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:45:05 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:06 np0005592158 python3.9[128199]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:45:07 np0005592158 python3.9[128322]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089505.934543-1464-156752273844533/.source.json _original_basename=.sa1r0ghs follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:45:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:07.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:07.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:07 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:07 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:07 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Jan 22 08:45:07 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:45:07.627597) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 08:45:07 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Jan 22 08:45:07 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089507627697, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2355, "num_deletes": 251, "total_data_size": 4759955, "memory_usage": 4808176, "flush_reason": "Manual Compaction"}
Jan 22 08:45:07 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Jan 22 08:45:07 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089507647596, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 3097044, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10264, "largest_seqno": 12614, "table_properties": {"data_size": 3088227, "index_size": 5055, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 21978, "raw_average_key_size": 20, "raw_value_size": 3068799, "raw_average_value_size": 2919, "num_data_blocks": 220, "num_entries": 1051, "num_filter_entries": 1051, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769089343, "oldest_key_time": 1769089343, "file_creation_time": 1769089507, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:45:07 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 20049 microseconds, and 9012 cpu microseconds.
Jan 22 08:45:07 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:45:07 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:45:07.647645) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 3097044 bytes OK
Jan 22 08:45:07 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:45:07.647686) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Jan 22 08:45:07 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:45:07.649370) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Jan 22 08:45:07 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:45:07.649382) EVENT_LOG_v1 {"time_micros": 1769089507649378, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 08:45:07 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:45:07.649402) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 08:45:07 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 4749195, prev total WAL file size 4749195, number of live WAL files 2.
Jan 22 08:45:07 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:45:07 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:45:07.650598) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Jan 22 08:45:07 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 08:45:07 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(3024KB)], [21(7707KB)]
Jan 22 08:45:07 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089507650699, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 10989184, "oldest_snapshot_seqno": -1}
Jan 22 08:45:07 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 4558 keys, 8311847 bytes, temperature: kUnknown
Jan 22 08:45:07 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089507704752, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 8311847, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8280006, "index_size": 19315, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11461, "raw_key_size": 112589, "raw_average_key_size": 24, "raw_value_size": 8195986, "raw_average_value_size": 1798, "num_data_blocks": 819, "num_entries": 4558, "num_filter_entries": 4558, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769089507, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:45:07 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:45:07 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:45:07.705083) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8311847 bytes
Jan 22 08:45:07 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:45:07.706817) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 202.9 rd, 153.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 7.5 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(6.2) write-amplify(2.7) OK, records in: 5077, records dropped: 519 output_compression: NoCompression
Jan 22 08:45:07 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:45:07.706839) EVENT_LOG_v1 {"time_micros": 1769089507706827, "job": 10, "event": "compaction_finished", "compaction_time_micros": 54160, "compaction_time_cpu_micros": 21605, "output_level": 6, "num_output_files": 1, "total_output_size": 8311847, "num_input_records": 5077, "num_output_records": 4558, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 08:45:07 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:45:07 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089507707570, "job": 10, "event": "table_file_deletion", "file_number": 23}
Jan 22 08:45:07 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:45:07 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089507709533, "job": 10, "event": "table_file_deletion", "file_number": 21}
Jan 22 08:45:07 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:45:07.650508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:45:07 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:45:07.709653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:45:07 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:45:07.709677) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:45:07 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:45:07.709679) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:45:07 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:45:07.709681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:45:07 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:45:07.709683) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:45:07 np0005592158 python3.9[128472]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:45:08 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:08 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 494 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:45:08 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:45:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:09.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:09.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:09 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:10 np0005592158 python3.9[128895]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 22 08:45:10 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:45:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:11.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:45:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:11.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:11 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:12 np0005592158 python3.9[129047]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 22 08:45:12 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:12 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 504 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:45:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:13.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:13.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:13 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:13 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:45:14 np0005592158 python3[129199]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 22 08:45:14 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:15.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:15.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:15 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:16 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:45:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:17.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:45:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:17.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:18 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:45:19 np0005592158 podman[129212]: 2026-01-22 13:45:19.116774607 +0000 UTC m=+4.899361579 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 22 08:45:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:19.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:19 np0005592158 podman[129331]: 2026-01-22 13:45:19.26073347 +0000 UTC m=+0.055324800 container create 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller)
Jan 22 08:45:19 np0005592158 podman[129331]: 2026-01-22 13:45:19.227831767 +0000 UTC m=+0.022423117 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 22 08:45:19 np0005592158 python3[129199]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 22 08:45:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:19.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:19 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:19 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 509 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:45:20 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:20 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:21.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:21 np0005592158 python3.9[129521]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:45:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:45:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:21.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:45:21 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:22 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:22 np0005592158 python3.9[129675]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:45:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:45:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:23.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:45:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:23.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:23 np0005592158 python3.9[129751]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:45:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:45:24 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:24 np0005592158 python3.9[129902]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769089523.5244226-1698-85458426902583/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:45:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:25.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:45:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:25.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:45:25 np0005592158 python3.9[129978]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 08:45:25 np0005592158 systemd[1]: Reloading.
Jan 22 08:45:25 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:25 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:45:25 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:45:26 np0005592158 python3.9[130089]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:45:26 np0005592158 systemd[1]: Reloading.
Jan 22 08:45:26 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:45:26 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:45:26 np0005592158 systemd[1]: Starting ovn_controller container...
Jan 22 08:45:26 np0005592158 systemd[1]: Started libcrun container.
Jan 22 08:45:26 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12dd6c5e4c9b17a9594d6d4a4b5c6490265d8b0ad3b98c5fc37508ca98ce00b3/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 22 08:45:26 np0005592158 systemd[1]: Started /usr/bin/podman healthcheck run 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536.
Jan 22 08:45:26 np0005592158 podman[130129]: 2026-01-22 13:45:26.850215344 +0000 UTC m=+0.134718391 container init 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 22 08:45:26 np0005592158 ovn_controller[130144]: + sudo -E kolla_set_configs
Jan 22 08:45:26 np0005592158 podman[130129]: 2026-01-22 13:45:26.871332874 +0000 UTC m=+0.155835901 container start 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 22 08:45:26 np0005592158 edpm-start-podman-container[130129]: ovn_controller
Jan 22 08:45:26 np0005592158 systemd[1]: Created slice User Slice of UID 0.
Jan 22 08:45:26 np0005592158 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 22 08:45:26 np0005592158 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 22 08:45:26 np0005592158 edpm-start-podman-container[130128]: Creating additional drop-in dependency for "ovn_controller" (89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536)
Jan 22 08:45:26 np0005592158 systemd[1]: Starting User Manager for UID 0...
Jan 22 08:45:26 np0005592158 podman[130150]: 2026-01-22 13:45:26.944224316 +0000 UTC m=+0.060733529 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 08:45:26 np0005592158 systemd[1]: 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536-1384928aa75b3952.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 08:45:26 np0005592158 systemd[1]: 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536-1384928aa75b3952.service: Failed with result 'exit-code'.
Jan 22 08:45:26 np0005592158 systemd[1]: Reloading.
Jan 22 08:45:27 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:45:27 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:45:27 np0005592158 systemd[130185]: Queued start job for default target Main User Target.
Jan 22 08:45:27 np0005592158 systemd[130185]: Created slice User Application Slice.
Jan 22 08:45:27 np0005592158 systemd[130185]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 22 08:45:27 np0005592158 systemd[130185]: Started Daily Cleanup of User's Temporary Directories.
Jan 22 08:45:27 np0005592158 systemd[130185]: Reached target Paths.
Jan 22 08:45:27 np0005592158 systemd[130185]: Reached target Timers.
Jan 22 08:45:27 np0005592158 systemd[130185]: Starting D-Bus User Message Bus Socket...
Jan 22 08:45:27 np0005592158 systemd[130185]: Starting Create User's Volatile Files and Directories...
Jan 22 08:45:27 np0005592158 systemd[130185]: Listening on D-Bus User Message Bus Socket.
Jan 22 08:45:27 np0005592158 systemd[130185]: Finished Create User's Volatile Files and Directories.
Jan 22 08:45:27 np0005592158 systemd[130185]: Reached target Sockets.
Jan 22 08:45:27 np0005592158 systemd[130185]: Reached target Basic System.
Jan 22 08:45:27 np0005592158 systemd[130185]: Reached target Main User Target.
Jan 22 08:45:27 np0005592158 systemd[130185]: Startup finished in 124ms.
Jan 22 08:45:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:27.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:27 np0005592158 systemd[1]: Started User Manager for UID 0.
Jan 22 08:45:27 np0005592158 systemd[1]: Started ovn_controller container.
Jan 22 08:45:27 np0005592158 systemd[1]: Started Session c1 of User root.
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: INFO:__main__:Validating config file
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: INFO:__main__:Writing out command to execute
Jan 22 08:45:27 np0005592158 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: ++ cat /run_command
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: + ARGS=
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: + sudo kolla_copy_cacerts
Jan 22 08:45:27 np0005592158 systemd[1]: Started Session c2 of User root.
Jan 22 08:45:27 np0005592158 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: + [[ ! -n '' ]]
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: + . kolla_extend_start
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: + umask 0022
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: 2026-01-22T13:45:27Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: 2026-01-22T13:45:27Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 22 08:45:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:27.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: 2026-01-22T13:45:27Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: 2026-01-22T13:45:27Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 22 08:45:27 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:27 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: 2026-01-22T13:45:27Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: 2026-01-22T13:45:27Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 22 08:45:27 np0005592158 NetworkManager[48926]: <info>  [1769089527.4956] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Jan 22 08:45:27 np0005592158 NetworkManager[48926]: <info>  [1769089527.4971] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 08:45:27 np0005592158 kernel: br-int: entered promiscuous mode
Jan 22 08:45:27 np0005592158 NetworkManager[48926]: <warn>  [1769089527.4976] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 08:45:27 np0005592158 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 08:45:27 np0005592158 NetworkManager[48926]: <info>  [1769089527.4991] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 22 08:45:27 np0005592158 NetworkManager[48926]: <info>  [1769089527.4998] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Jan 22 08:45:27 np0005592158 NetworkManager[48926]: <info>  [1769089527.5004] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: 2026-01-22T13:45:27Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 22 08:45:27 np0005592158 systemd-udevd[130277]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: 2026-01-22T13:45:27Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: 2026-01-22T13:45:27Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: 2026-01-22T13:45:27Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: 2026-01-22T13:45:27Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: 2026-01-22T13:45:27Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: 2026-01-22T13:45:27Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: 2026-01-22T13:45:27Z|00014|main|INFO|OVS feature set changed, force recompute.
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: 2026-01-22T13:45:27Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: 2026-01-22T13:45:27Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: 2026-01-22T13:45:27Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: 2026-01-22T13:45:27Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: 2026-01-22T13:45:27Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: 2026-01-22T13:45:27Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: 2026-01-22T13:45:27Z|00021|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: 2026-01-22T13:45:27Z|00022|main|INFO|OVS feature set changed, force recompute.
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: 2026-01-22T13:45:27Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: 2026-01-22T13:45:27Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: 2026-01-22T13:45:27Z|00001|statctrl(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: 2026-01-22T13:45:27Z|00002|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: 2026-01-22T13:45:27Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: 2026-01-22T13:45:27Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: 2026-01-22T13:45:27Z|00003|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 22 08:45:27 np0005592158 ovn_controller[130144]: 2026-01-22T13:45:27Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 22 08:45:27 np0005592158 NetworkManager[48926]: <info>  [1769089527.6449] manager: (ovn-d9fd1e-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 22 08:45:27 np0005592158 kernel: genev_sys_6081: entered promiscuous mode
Jan 22 08:45:27 np0005592158 NetworkManager[48926]: <info>  [1769089527.6675] device (genev_sys_6081): carrier: link connected
Jan 22 08:45:27 np0005592158 NetworkManager[48926]: <info>  [1769089527.6678] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Jan 22 08:45:28 np0005592158 NetworkManager[48926]: <info>  [1769089528.0352] manager: (ovn-c4fa18-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Jan 22 08:45:28 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:28 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 514 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:45:28 np0005592158 NetworkManager[48926]: <info>  [1769089528.5614] manager: (ovn-7335e4-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Jan 22 08:45:28 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:45:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:29.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:29 np0005592158 python3.9[130407]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 22 08:45:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:29.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:29 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:30 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:30 np0005592158 python3.9[130560]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:45:31 np0005592158 python3.9[130683]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089530.0557806-1833-76434921303399/.source.yaml _original_basename=.3wxv79t1 follow=False checksum=46f66c8a157c96fcb7cc69848fe925e114c66b53 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:45:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:31.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:31.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:31 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:31 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 08:45:31 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2008 writes, 12K keys, 2008 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s#012Cumulative WAL: 2008 writes, 2008 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2008 writes, 12K keys, 2008 commit groups, 1.0 writes per commit group, ingest: 23.79 MB, 0.04 MB/s#012Interval WAL: 2008 writes, 2008 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     54.2      0.27              0.04         5    0.054       0      0       0.0       0.0#012  L6      1/0    7.93 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.3    163.0    135.8      0.25              0.09         4    0.062     18K   1808       0.0       0.0#012 Sum      1/0    7.93 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.3     78.0     93.2      0.52              0.12         9    0.058     18K   1808       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.3     78.3     93.6      0.52              0.12         8    0.065     18K   1808       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    163.0    135.8      0.25              0.09         4    0.062     18K   1808       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     54.6      0.27              0.04         4    0.067       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.014, interval 0.014#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.05 GB write, 0.08 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.5 seconds#012Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f7686a91f0#2 capacity: 304.00 MB usage: 1.30 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 7.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(62,1.13 MB,0.37106%) FilterBlock(9,59.98 KB,0.0192692%) IndexBlock(9,116.08 KB,0.0372887%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 22 08:45:32 np0005592158 python3.9[130835]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:45:32 np0005592158 ovs-vsctl[130836]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 22 08:45:32 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:32 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:32 np0005592158 python3.9[130988]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:45:32 np0005592158 ovs-vsctl[130990]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 22 08:45:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:33.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:33.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:33 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 523 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:45:33 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:45:34 np0005592158 python3.9[131143]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:45:34 np0005592158 ovs-vsctl[131144]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 22 08:45:34 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:34 np0005592158 systemd[1]: session-45.scope: Deactivated successfully.
Jan 22 08:45:34 np0005592158 systemd-logind[787]: Session 45 logged out. Waiting for processes to exit.
Jan 22 08:45:34 np0005592158 systemd[1]: session-45.scope: Consumed 58.908s CPU time.
Jan 22 08:45:34 np0005592158 systemd-logind[787]: Removed session 45.
Jan 22 08:45:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:35.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:45:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:35.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:45:36 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:37.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:37 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:37.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:37 np0005592158 systemd[1]: Stopping User Manager for UID 0...
Jan 22 08:45:37 np0005592158 systemd[130185]: Activating special unit Exit the Session...
Jan 22 08:45:37 np0005592158 systemd[130185]: Stopped target Main User Target.
Jan 22 08:45:37 np0005592158 systemd[130185]: Stopped target Basic System.
Jan 22 08:45:37 np0005592158 systemd[130185]: Stopped target Paths.
Jan 22 08:45:37 np0005592158 systemd[130185]: Stopped target Sockets.
Jan 22 08:45:37 np0005592158 systemd[130185]: Stopped target Timers.
Jan 22 08:45:37 np0005592158 systemd[130185]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 22 08:45:37 np0005592158 systemd[130185]: Closed D-Bus User Message Bus Socket.
Jan 22 08:45:37 np0005592158 systemd[130185]: Stopped Create User's Volatile Files and Directories.
Jan 22 08:45:37 np0005592158 systemd[130185]: Removed slice User Application Slice.
Jan 22 08:45:37 np0005592158 systemd[130185]: Reached target Shutdown.
Jan 22 08:45:37 np0005592158 systemd[130185]: Finished Exit the Session.
Jan 22 08:45:37 np0005592158 systemd[130185]: Reached target Exit the Session.
Jan 22 08:45:37 np0005592158 systemd[1]: user@0.service: Deactivated successfully.
Jan 22 08:45:37 np0005592158 systemd[1]: Stopped User Manager for UID 0.
Jan 22 08:45:37 np0005592158 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 22 08:45:37 np0005592158 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 22 08:45:37 np0005592158 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 22 08:45:37 np0005592158 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 22 08:45:37 np0005592158 systemd[1]: Removed slice User Slice of UID 0.
Jan 22 08:45:38 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:45:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:39.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:39 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:39 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 528 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:45:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:39.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:40 np0005592158 systemd-logind[787]: New session 47 of user zuul.
Jan 22 08:45:40 np0005592158 systemd[1]: Started Session 47 of User zuul.
Jan 22 08:45:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:41.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:41.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:41 np0005592158 python3.9[131439]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:45:42 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:42 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:42 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:42 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 08:45:42 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 08:45:42 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:42 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:45:42 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Jan 22 08:45:42 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:45:42 np0005592158 python3.9[131617]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:45:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:43.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:45:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:43.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:45:43 np0005592158 python3.9[131769]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:45:43 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:45:44 np0005592158 python3.9[131921]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:45:45 np0005592158 python3.9[132073]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:45:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:45.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:45.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:45 np0005592158 python3.9[132225]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:45:46 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:46 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:45:46 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:45:46 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:45:46 np0005592158 python3.9[132376]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:45:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:47.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:47 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:47 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:45:47 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:45:47 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:47.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:47 np0005592158 python3.9[132528]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 22 08:45:48 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:48 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 533 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:45:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:45:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:45:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:49.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:45:49 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:49 np0005592158 python3.9[132678]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:45:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:45:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:49.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:45:50 np0005592158 python3.9[132799]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769089548.7459276-220-169547906417519/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:45:50 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:50 np0005592158 python3.9[132949]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:45:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:45:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:51.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:45:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:51.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:51 np0005592158 python3.9[133070]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769089550.4398472-265-262474158336582/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:45:51 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:52 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:52 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:45:52 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:52 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:45:52 np0005592158 python3.9[133222]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:45:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:53.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:53.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:53 np0005592158 python3.9[133356]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:45:53 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 544 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:45:53 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:53 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:45:54 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:55.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:45:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:55.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:45:55 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:56 np0005592158 python3.9[133509]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 08:45:56 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:57 np0005592158 ovn_controller[130144]: 2026-01-22T13:45:57Z|00025|memory|INFO|16512 kB peak resident set size after 29.7 seconds
Jan 22 08:45:57 np0005592158 ovn_controller[130144]: 2026-01-22T13:45:57Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Jan 22 08:45:57 np0005592158 podman[133663]: 2026-01-22 13:45:57.1260599 +0000 UTC m=+0.106610106 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 08:45:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:57.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:57 np0005592158 python3.9[133662]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:45:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:57.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:57 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:57 np0005592158 python3.9[133807]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769089556.6324115-376-121923078544855/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:45:58 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 549 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:45:58 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:45:59 np0005592158 python3.9[133957]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:45:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:59.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:45:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:59.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:59 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:59 np0005592158 python3.9[134078]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769089558.6265862-376-58653762888516/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:46:00 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:46:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:01.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:46:01 np0005592158 python3.9[134228]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:46:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:01.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:01 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:01 np0005592158 python3.9[134349]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769089560.8851347-508-265462852637144/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:46:02 np0005592158 python3.9[134499]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:46:02 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:03.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:03 np0005592158 python3.9[134620]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769089562.2240312-508-245587857674355/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:46:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:03.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:03 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:46:04 np0005592158 python3.9[134770]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:46:04 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:05 np0005592158 python3.9[134924]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:46:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:05.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:05.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:05 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:06 np0005592158 python3.9[135076]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:46:06 np0005592158 python3.9[135154]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:46:06 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:07.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:07 np0005592158 python3.9[135306]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:46:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:07.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:07 np0005592158 python3.9[135384]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:46:07 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:07 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 554 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:46:08 np0005592158 python3.9[135536]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:46:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:46:08 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:09.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:09.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:09 np0005592158 python3.9[135688]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:46:10 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:10 np0005592158 python3.9[135766]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:46:11 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:11.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:11 np0005592158 python3.9[135918]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:46:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:11.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:11 np0005592158 python3.9[135996]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:46:12 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:13 np0005592158 python3.9[136148]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:46:13 np0005592158 systemd[1]: Reloading.
Jan 22 08:46:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:13.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:13 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:46:13 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:46:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:13.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:13 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:46:13 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:13 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 564 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:46:13 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:14 np0005592158 python3.9[136337]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:46:14 np0005592158 python3.9[136415]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:46:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:15.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:15 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:15.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:15 np0005592158 python3.9[136567]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:46:16 np0005592158 python3.9[136645]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:46:16 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:17 np0005592158 python3.9[136797]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:46:17 np0005592158 systemd[1]: Reloading.
Jan 22 08:46:17 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:46:17 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:46:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:17.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:17 np0005592158 systemd[1]: Starting Create netns directory...
Jan 22 08:46:17 np0005592158 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 22 08:46:17 np0005592158 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 22 08:46:17 np0005592158 systemd[1]: Finished Create netns directory.
Jan 22 08:46:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:17.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:18 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:18 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:18 np0005592158 python3.9[136992]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:46:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:46:19 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:19 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 569 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:46:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:46:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:19.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:46:19 np0005592158 python3.9[137144]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:46:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:19.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:19 np0005592158 python3.9[137268]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769089578.8318424-961-72795933302664/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:46:20 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:21 np0005592158 python3.9[137420]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:46:21 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:21.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:21.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:21 np0005592158 python3.9[137572]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:46:22 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:22 np0005592158 python3.9[137724]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:46:23 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:23.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:23 np0005592158 python3.9[137847]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089582.1922517-1060-166932575519149/.source.json _original_basename=.mc849uot follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:46:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:23.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:46:24 np0005592158 python3.9[137997]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:46:24 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:25.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:25 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:25.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:26 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:26 np0005592158 python3.9[138420]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 22 08:46:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:27.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:27 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:46:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:27.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:46:27 np0005592158 podman[138544]: 2026-01-22 13:46:27.559311579 +0000 UTC m=+0.085581345 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 08:46:27 np0005592158 python3.9[138591]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 22 08:46:28 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:28 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 574 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:46:28 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:46:29 np0005592158 python3[138750]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 22 08:46:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:29.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:29 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:29.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:30 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:31.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:31 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:31.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:32 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:33.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:33.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:33 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:33 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 584 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:46:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:46:35 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:35 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:35.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:35.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:37.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:37.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:38 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:38 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:38 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:38 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:46:38 np0005592158 podman[138762]: 2026-01-22 13:46:38.966347258 +0000 UTC m=+9.785166721 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 22 08:46:39 np0005592158 podman[138895]: 2026-01-22 13:46:39.144749798 +0000 UTC m=+0.058500888 container create 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 08:46:39 np0005592158 podman[138895]: 2026-01-22 13:46:39.115031666 +0000 UTC m=+0.028782766 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 22 08:46:39 np0005592158 python3[138750]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 22 08:46:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:39.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:39.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:39 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:39 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 589 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:46:39 np0005592158 python3.9[139085]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:46:40 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:41 np0005592158 python3.9[139239]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:46:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:41.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:41.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:41 np0005592158 python3.9[139315]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:46:41 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:42 np0005592158 python3.9[139466]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769089601.7177265-1294-96858669511986/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:46:42 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:42 np0005592158 python3.9[139542]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 08:46:42 np0005592158 systemd[1]: Reloading.
Jan 22 08:46:43 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:46:43 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:46:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:43.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:46:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:43.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:46:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:46:44 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:44 np0005592158 python3.9[139653]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:46:44 np0005592158 systemd[1]: Reloading.
Jan 22 08:46:44 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:46:44 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:46:44 np0005592158 systemd[1]: Starting ovn_metadata_agent container...
Jan 22 08:46:45 np0005592158 systemd[1]: Started libcrun container.
Jan 22 08:46:45 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1854a7059a530ec13fd336313dc43f22959daca98bb830b9b905c42edd9e391b/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 22 08:46:45 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1854a7059a530ec13fd336313dc43f22959daca98bb830b9b905c42edd9e391b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 22 08:46:45 np0005592158 systemd[1]: Started /usr/bin/podman healthcheck run 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69.
Jan 22 08:46:45 np0005592158 podman[139694]: 2026-01-22 13:46:45.226412276 +0000 UTC m=+0.199053991 container init 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 08:46:45 np0005592158 ovn_metadata_agent[139710]: + sudo -E kolla_set_configs
Jan 22 08:46:45 np0005592158 podman[139694]: 2026-01-22 13:46:45.265074794 +0000 UTC m=+0.237716509 container start 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 08:46:45 np0005592158 edpm-start-podman-container[139694]: ovn_metadata_agent
Jan 22 08:46:45 np0005592158 edpm-start-podman-container[139693]: Creating additional drop-in dependency for "ovn_metadata_agent" (49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69)
Jan 22 08:46:45 np0005592158 ovn_metadata_agent[139710]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 22 08:46:45 np0005592158 ovn_metadata_agent[139710]: INFO:__main__:Validating config file
Jan 22 08:46:45 np0005592158 ovn_metadata_agent[139710]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 22 08:46:45 np0005592158 ovn_metadata_agent[139710]: INFO:__main__:Copying service configuration files
Jan 22 08:46:45 np0005592158 ovn_metadata_agent[139710]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 22 08:46:45 np0005592158 ovn_metadata_agent[139710]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 22 08:46:45 np0005592158 ovn_metadata_agent[139710]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 22 08:46:45 np0005592158 ovn_metadata_agent[139710]: INFO:__main__:Writing out command to execute
Jan 22 08:46:45 np0005592158 ovn_metadata_agent[139710]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 22 08:46:45 np0005592158 ovn_metadata_agent[139710]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 22 08:46:45 np0005592158 ovn_metadata_agent[139710]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 22 08:46:45 np0005592158 ovn_metadata_agent[139710]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 22 08:46:45 np0005592158 ovn_metadata_agent[139710]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 22 08:46:45 np0005592158 ovn_metadata_agent[139710]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 22 08:46:45 np0005592158 ovn_metadata_agent[139710]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 22 08:46:45 np0005592158 podman[139717]: 2026-01-22 13:46:45.33587984 +0000 UTC m=+0.058296862 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 22 08:46:45 np0005592158 ovn_metadata_agent[139710]: ++ cat /run_command
Jan 22 08:46:45 np0005592158 ovn_metadata_agent[139710]: + CMD=neutron-ovn-metadata-agent
Jan 22 08:46:45 np0005592158 ovn_metadata_agent[139710]: + ARGS=
Jan 22 08:46:45 np0005592158 ovn_metadata_agent[139710]: + sudo kolla_copy_cacerts
Jan 22 08:46:45 np0005592158 systemd[1]: Reloading.
Jan 22 08:46:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:45.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:45 np0005592158 ovn_metadata_agent[139710]: Running command: 'neutron-ovn-metadata-agent'
Jan 22 08:46:45 np0005592158 ovn_metadata_agent[139710]: + [[ ! -n '' ]]
Jan 22 08:46:45 np0005592158 ovn_metadata_agent[139710]: + . kolla_extend_start
Jan 22 08:46:45 np0005592158 ovn_metadata_agent[139710]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 22 08:46:45 np0005592158 ovn_metadata_agent[139710]: + umask 0022
Jan 22 08:46:45 np0005592158 ovn_metadata_agent[139710]: + exec neutron-ovn-metadata-agent
Jan 22 08:46:45 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:46:45 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:46:45 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:45 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:45.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:45 np0005592158 systemd[1]: Started ovn_metadata_agent container.
Jan 22 08:46:46 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:46 np0005592158 python3.9[139945]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 22 08:46:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:47.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.382 139715 INFO neutron.common.config [-] Logging enabled!#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.382 139715 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.382 139715 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.383 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.383 139715 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.383 139715 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.383 139715 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.383 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.384 139715 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.384 139715 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.384 139715 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.384 139715 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.384 139715 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.384 139715 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.384 139715 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.384 139715 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.385 139715 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.385 139715 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.385 139715 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.385 139715 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.385 139715 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.385 139715 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.385 139715 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.385 139715 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.386 139715 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.386 139715 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.386 139715 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.386 139715 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.386 139715 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.386 139715 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.386 139715 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.387 139715 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.387 139715 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.387 139715 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.387 139715 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.387 139715 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.387 139715 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.387 139715 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.387 139715 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.387 139715 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.388 139715 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.388 139715 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.388 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.388 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.388 139715 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.388 139715 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.388 139715 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.388 139715 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.388 139715 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.389 139715 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.389 139715 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.389 139715 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.389 139715 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.389 139715 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.389 139715 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.389 139715 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.389 139715 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.389 139715 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.389 139715 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.390 139715 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.390 139715 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.390 139715 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.390 139715 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.390 139715 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.390 139715 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.390 139715 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.391 139715 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.391 139715 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.391 139715 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.391 139715 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.391 139715 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.391 139715 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.391 139715 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.392 139715 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.392 139715 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.392 139715 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.392 139715 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.392 139715 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.392 139715 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.392 139715 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.392 139715 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.393 139715 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.393 139715 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.393 139715 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.393 139715 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.393 139715 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.393 139715 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.393 139715 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.394 139715 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.394 139715 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.394 139715 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.394 139715 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.394 139715 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.394 139715 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.394 139715 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.394 139715 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.395 139715 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.395 139715 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.395 139715 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.395 139715 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.395 139715 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.395 139715 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.395 139715 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.395 139715 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.395 139715 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.395 139715 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.396 139715 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.396 139715 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.396 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.396 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.396 139715 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.396 139715 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.396 139715 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.396 139715 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.396 139715 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.397 139715 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.397 139715 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.397 139715 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.397 139715 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.397 139715 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.397 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.397 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.397 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.397 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.398 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.398 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.398 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.398 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.398 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.398 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.398 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.398 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.398 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.398 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.399 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.399 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.399 139715 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.399 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.399 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.399 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.399 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.399 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.399 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.400 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.400 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.400 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.400 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.400 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.400 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.400 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.400 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.400 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.400 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.401 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.401 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.401 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.401 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.401 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.401 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.401 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.401 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.401 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.402 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.402 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.402 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.402 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.402 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.402 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.402 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.402 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.402 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.402 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.403 139715 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.403 139715 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.403 139715 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.403 139715 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.403 139715 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.403 139715 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.403 139715 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.403 139715 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.403 139715 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.404 139715 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.404 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.404 139715 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.404 139715 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.404 139715 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.404 139715 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.404 139715 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.404 139715 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.404 139715 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.405 139715 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.405 139715 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.405 139715 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.405 139715 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.405 139715 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.405 139715 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.405 139715 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.405 139715 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.406 139715 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.406 139715 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.406 139715 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.406 139715 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.406 139715 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.406 139715 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.406 139715 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.406 139715 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.406 139715 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.407 139715 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.407 139715 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.407 139715 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.407 139715 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.407 139715 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.407 139715 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.407 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.407 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.407 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.408 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.408 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.408 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.408 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.408 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.408 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.408 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.408 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.408 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.409 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.409 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.409 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.409 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.409 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.409 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.409 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.409 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.410 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.410 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.410 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.410 139715 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.410 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.410 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.410 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.410 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.410 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.411 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.411 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.411 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.411 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.411 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.411 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.411 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.411 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.411 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.412 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.412 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.412 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.412 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.412 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.412 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.412 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.412 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.412 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.413 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.413 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.413 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.413 139715 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.413 139715 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.413 139715 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.413 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.413 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.414 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.414 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.414 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.414 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.414 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.414 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.414 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.414 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.415 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.415 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.415 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.415 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.415 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.415 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.415 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.415 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.416 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.416 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.416 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.416 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.416 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.416 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.416 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.416 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.417 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.417 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.417 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.417 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.417 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.417 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.417 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.417 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.418 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.418 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.418 139715 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.418 139715 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.428 139715 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.428 139715 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.429 139715 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.429 139715 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.429 139715 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.444 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name c803af81-5cf0-46ac-8f46-401e876a838c (UUID: c803af81-5cf0-46ac-8f46-401e876a838c) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.468 139715 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.468 139715 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.469 139715 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.469 139715 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.474 139715 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.480 139715 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.486 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'c803af81-5cf0-46ac-8f46-401e876a838c'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fd8a7b85640>], external_ids={}, name=c803af81-5cf0-46ac-8f46-401e876a838c, nb_cfg_timestamp=1769089535619, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.487 139715 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fd8a7b74f40>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.488 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.488 139715 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.489 139715 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.489 139715 INFO oslo_service.service [-] Starting 1 workers#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.493 139715 DEBUG oslo_service.service [-] Started child 139970 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.497 139715 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpcaxevftl/privsep.sock']#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.500 139970 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-427228'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.531 139970 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.532 139970 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.532 139970 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.536 139970 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.542 139970 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Jan 22 08:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:47.550 139970 INFO eventlet.wsgi.server [-] (139970) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Jan 22 08:46:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:46:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:47.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:46:47 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:48 np0005592158 python3.9[140102]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:46:48 np0005592158 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 22 08:46:48 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:48.360 139715 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Jan 22 08:46:48 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:48.361 139715 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpcaxevftl/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Jan 22 08:46:48 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:48.137 140104 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 22 08:46:48 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:48.144 140104 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 22 08:46:48 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:48.147 140104 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Jan 22 08:46:48 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:48.147 140104 INFO oslo.privsep.daemon [-] privsep daemon running as pid 140104#033[00m
Jan 22 08:46:48 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:48.364 140104 DEBUG oslo.privsep.daemon [-] privsep: reply[ff3e6009-09c6-446f-a39a-d2d40e66cdc2]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 08:46:48 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:48 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 594 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:46:48 np0005592158 python3.9[140232]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089607.5571723-1429-237980037018307/.source.yaml _original_basename=.jeqc3w_n follow=False checksum=a7c93daf1344287e5303b3d1648c714a9349cb4e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:46:48 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:48.929 140104 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 08:46:48 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:48.929 140104 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 08:46:48 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:48.929 140104 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 08:46:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:46:49 np0005592158 systemd[1]: session-47.scope: Deactivated successfully.
Jan 22 08:46:49 np0005592158 systemd[1]: session-47.scope: Consumed 57.881s CPU time.
Jan 22 08:46:49 np0005592158 systemd-logind[787]: Session 47 logged out. Waiting for processes to exit.
Jan 22 08:46:49 np0005592158 systemd-logind[787]: Removed session 47.
Jan 22 08:46:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:49.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:49.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.624 140104 DEBUG oslo.privsep.daemon [-] privsep: reply[6c89a9c8-e3c6-4f91-a9ea-9e42da5ef136]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.626 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, column=external_ids, values=({'neutron:ovn-metadata-id': '99503455-a922-596d-bbdf-dff82d80b62f'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.636 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.642 139715 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.643 139715 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.643 139715 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.643 139715 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.643 139715 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.643 139715 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.643 139715 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.644 139715 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.644 139715 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.644 139715 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.644 139715 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.645 139715 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.645 139715 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.645 139715 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.645 139715 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.645 139715 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.645 139715 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.645 139715 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.646 139715 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.646 139715 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.646 139715 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.646 139715 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.646 139715 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.646 139715 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.646 139715 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.646 139715 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.647 139715 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.647 139715 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.647 139715 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.647 139715 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.647 139715 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.647 139715 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.647 139715 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.647 139715 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.648 139715 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.648 139715 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.648 139715 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.648 139715 DEBUG oslo_service.service [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.648 139715 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.648 139715 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.648 139715 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.648 139715 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.649 139715 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.649 139715 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.649 139715 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.649 139715 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.649 139715 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.649 139715 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.649 139715 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.649 139715 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.649 139715 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.650 139715 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.650 139715 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.650 139715 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.650 139715 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.650 139715 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.650 139715 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.650 139715 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.650 139715 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.650 139715 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.650 139715 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.651 139715 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.651 139715 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.651 139715 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.651 139715 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.651 139715 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.651 139715 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.651 139715 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.651 139715 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.651 139715 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.651 139715 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.652 139715 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.652 139715 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.652 139715 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.652 139715 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.652 139715 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.652 139715 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.652 139715 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.652 139715 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.652 139715 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.653 139715 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.653 139715 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.653 139715 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.653 139715 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.653 139715 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.653 139715 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.653 139715 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.653 139715 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.653 139715 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.653 139715 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.654 139715 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.654 139715 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.654 139715 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.654 139715 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.654 139715 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.654 139715 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.654 139715 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.654 139715 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.654 139715 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.654 139715 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.654 139715 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.655 139715 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.655 139715 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.655 139715 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.655 139715 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.655 139715 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.655 139715 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.655 139715 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.655 139715 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.655 139715 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.656 139715 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.656 139715 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.656 139715 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.656 139715 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.656 139715 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.656 139715 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.656 139715 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.656 139715 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.656 139715 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.657 139715 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.657 139715 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.657 139715 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.657 139715 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.657 139715 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.657 139715 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.657 139715 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.657 139715 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.658 139715 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.658 139715 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.658 139715 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.658 139715 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.658 139715 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.658 139715 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.658 139715 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.659 139715 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.659 139715 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.659 139715 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.659 139715 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.659 139715 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.659 139715 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.659 139715 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.660 139715 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.660 139715 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.660 139715 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.660 139715 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.660 139715 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.660 139715 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.660 139715 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.660 139715 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.661 139715 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.661 139715 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.661 139715 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.661 139715 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.661 139715 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.661 139715 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.661 139715 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.661 139715 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.662 139715 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.662 139715 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.662 139715 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.662 139715 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.662 139715 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.662 139715 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.662 139715 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.662 139715 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.662 139715 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.662 139715 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.663 139715 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.663 139715 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.663 139715 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.663 139715 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.663 139715 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.663 139715 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.663 139715 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.663 139715 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.663 139715 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.664 139715 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.664 139715 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.664 139715 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.664 139715 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.664 139715 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.664 139715 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.664 139715 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.664 139715 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.664 139715 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.665 139715 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.665 139715 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.665 139715 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.665 139715 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.665 139715 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.665 139715 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.665 139715 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.665 139715 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.665 139715 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.665 139715 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.666 139715 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.666 139715 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.666 139715 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.666 139715 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.666 139715 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.666 139715 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.666 139715 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.666 139715 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.666 139715 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.666 139715 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.667 139715 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.667 139715 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.667 139715 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.667 139715 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.667 139715 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.667 139715 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.667 139715 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.667 139715 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.667 139715 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.667 139715 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.668 139715 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.668 139715 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.668 139715 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.668 139715 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.668 139715 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.668 139715 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.668 139715 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.668 139715 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.668 139715 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.668 139715 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.668 139715 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.669 139715 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.669 139715 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.669 139715 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.669 139715 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.669 139715 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.669 139715 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.669 139715 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.669 139715 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.669 139715 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.669 139715 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.670 139715 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.670 139715 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.670 139715 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.670 139715 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.670 139715 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.670 139715 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.670 139715 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.670 139715 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.670 139715 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.670 139715 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.671 139715 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.671 139715 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.671 139715 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.671 139715 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.671 139715 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.671 139715 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.671 139715 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.671 139715 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.671 139715 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.671 139715 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.672 139715 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.672 139715 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.672 139715 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.672 139715 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.672 139715 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.672 139715 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.672 139715 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.672 139715 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.672 139715 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.673 139715 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.673 139715 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.673 139715 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.673 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.673 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.673 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.673 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.673 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.673 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.674 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.674 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.674 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.674 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.674 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.674 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.674 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.674 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.674 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.674 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.675 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.675 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.675 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.675 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.675 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.675 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.675 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.675 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.675 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.675 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.676 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.676 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.676 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.676 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.676 139715 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.676 139715 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.676 139715 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.677 139715 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.677 139715 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:46:49.677 139715 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 22 08:46:49 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:50 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:51.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:51.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:51 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:52 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:52 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 604 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:46:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:53.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:53.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:53 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:46:54 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:54 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:46:54 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:46:54 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:46:54 np0005592158 systemd-logind[787]: New session 48 of user zuul.
Jan 22 08:46:54 np0005592158 systemd[1]: Started Session 48 of User zuul.
Jan 22 08:46:55 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:55.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:55.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:55 np0005592158 python3.9[140542]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:46:56 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:56 np0005592158 python3.9[140698]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:46:57 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:57.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:57.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:58 np0005592158 podman[140830]: 2026-01-22 13:46:58.135585885 +0000 UTC m=+0.112955031 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 08:46:58 np0005592158 python3.9[140881]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 08:46:58 np0005592158 systemd[1]: Reloading.
Jan 22 08:46:58 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:46:58 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:58 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:46:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:46:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:46:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:59.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:46:59 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:59 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 609 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:46:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:46:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:59.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:59 np0005592158 python3.9[141075]: ansible-ansible.builtin.service_facts Invoked
Jan 22 08:46:59 np0005592158 network[141092]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 08:46:59 np0005592158 network[141093]: 'network-scripts' will be removed from distribution in near future.
Jan 22 08:46:59 np0005592158 network[141094]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 08:47:00 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:01.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:01.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:01 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:01 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:47:01 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:47:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:47:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:03.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:47:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:03.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:47:04 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:04 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:05 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:05.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:05.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:06 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:06 np0005592158 python3.9[141406]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:47:07 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:07.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:47:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:07.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:47:07 np0005592158 python3.9[141559]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:47:08 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:08 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 614 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:47:08 np0005592158 python3.9[141712]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:47:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:47:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:47:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:09.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:47:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:09.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:10 np0005592158 python3.9[141865]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:47:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:11.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:11 np0005592158 python3.9[142018]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:47:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:11.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:12 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:12 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:12 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:12 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:12 np0005592158 python3.9[142171]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:47:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:13.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:13.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:13 np0005592158 python3.9[142324]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:47:13 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:47:13 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:13 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:13 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 619 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:47:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:15.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:15.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:15 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:15 np0005592158 podman[142449]: 2026-01-22 13:47:15.76439449 +0000 UTC m=+0.068360917 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 08:47:15 np0005592158 python3.9[142493]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:16 np0005592158 python3.9[142649]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:16 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:16 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:17 np0005592158 python3.9[142801]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:17.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:17.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:17 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:17 np0005592158 python3.9[142953]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:18 np0005592158 python3.9[143105]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:47:19 np0005592158 python3.9[143257]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:19.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:19.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:20 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:20 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 624 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:47:20 np0005592158 python3.9[143409]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:21 np0005592158 python3.9[143561]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:21.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:21.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:21 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:21 np0005592158 python3.9[143713]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:22 np0005592158 python3.9[143865]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:23 np0005592158 python3.9[144017]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:23 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:23 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:23 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:23.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:23.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:23 np0005592158 python3.9[144169]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:47:24 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:24 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 634 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:47:24 np0005592158 python3.9[144321]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:25.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:25 np0005592158 python3.9[144473]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:25.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:26 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:26 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:26 np0005592158 python3.9[144625]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:47:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:27.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:27 np0005592158 python3.9[144777]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 22 08:47:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:27.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:29 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:29 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:29 np0005592158 podman[144901]: 2026-01-22 13:47:29.067893865 +0000 UTC m=+0.108460039 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 08:47:29 np0005592158 python3.9[144944]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 08:47:29 np0005592158 systemd[1]: Reloading.
Jan 22 08:47:29 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:47:29 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:47:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:29.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:29 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:47:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:29.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:29 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:29 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 639 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:47:29 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:30 np0005592158 python3.9[145140]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:47:30 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:31 np0005592158 python3.9[145293]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:47:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:31.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:31.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:32 np0005592158 python3.9[145446]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:47:32 np0005592158 python3.9[145599]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:47:33 np0005592158 python3.9[145752]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:47:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:33.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:33.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:34 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:34 np0005592158 python3.9[145905]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:47:34 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:47:34 np0005592158 python3.9[146058]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:47:35 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:35 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:35 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:35.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:35.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:36 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:37.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:37.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:37 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:37 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:37 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 644 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:47:38 np0005592158 python3.9[146211]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 22 08:47:38 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:39 np0005592158 python3.9[146364]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 22 08:47:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:39.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:39 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:47:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:39.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:40 np0005592158 python3.9[146522]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 22 08:47:40 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:47:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:41.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:47:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:41.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:41 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:41 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:42 np0005592158 python3.9[146682]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:47:42 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:42 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 654 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:47:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:43.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:43.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:43 np0005592158 python3.9[146766]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:47:43 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:44 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:47:45 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:45.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:45.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:46 np0005592158 podman[146770]: 2026-01-22 13:47:46.102606903 +0000 UTC m=+0.088581362 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 22 08:47:46 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:47 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:47:47.421 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 08:47:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:47:47.421 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 08:47:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:47:47.422 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 08:47:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:47:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:47.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:47:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:47.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:48 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:47:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:49.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:47:49 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:47:49 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:49 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 659 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:47:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:49.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:50 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:50 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:51.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:47:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:51.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:47:51 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:53 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:53.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:47:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:53.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:47:54 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:47:55 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:55.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:55.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:56 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:56 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:57.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:57.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:57 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:58 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:58 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 664 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:47:58 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:47:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:59.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:47:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:47:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:59.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:59 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:59 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:48:00 np0005592158 podman[146798]: 2026-01-22 13:48:00.163997564 +0000 UTC m=+0.134559746 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Jan 22 08:48:00 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:01.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:01.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:02 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:03 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:03 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:48:03 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:48:03 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:48:03 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 674 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:48:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:03.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:48:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:03.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:48:04 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:04 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:48:05 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:48:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:05.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:48:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:48:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:05.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:48:06 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:07 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:07.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:07.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:08 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:48:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:09.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:48:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:09.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:10 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:10 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 679 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:48:10 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:48:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:11.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:48:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:11.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:48:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:13.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:13.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:48:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:15.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:48:15 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:48:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:48:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:15.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:48:16 np0005592158 podman[147155]: 2026-01-22 13:48:16.412893889 +0000 UTC m=+0.087915127 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 22 08:48:17 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:17 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:17 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:17 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:48:17 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:17 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:17 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:17 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:17 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:17 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:48:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:48:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:17.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:48:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:17.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:18 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 684 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:48:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:48:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:19.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:48:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:19.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:19 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:19 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:20 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:48:20 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:20 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:48:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:21.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:48:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:21.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:22 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:23 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:23 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 694 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:48:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:23.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:23.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:24 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:25 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:25 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:48:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:25.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:48:25 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:48:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:25.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:26 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:27 np0005592158 kernel: SELinux:  Converting 2775 SID table entries...
Jan 22 08:48:27 np0005592158 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 08:48:27 np0005592158 kernel: SELinux:  policy capability open_perms=1
Jan 22 08:48:27 np0005592158 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 08:48:27 np0005592158 kernel: SELinux:  policy capability always_check_network=0
Jan 22 08:48:27 np0005592158 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 08:48:27 np0005592158 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 08:48:27 np0005592158 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 08:48:27 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:27.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:27.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:28 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:29.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:29 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 699 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:48:29 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:48:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:29.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:48:30 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:48:30 np0005592158 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Jan 22 08:48:31 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:31 np0005592158 podman[147209]: 2026-01-22 13:48:31.156309259 +0000 UTC m=+0.117915915 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 22 08:48:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:31.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:31.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:31 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:31 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:32 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:48:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:33.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:48:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:33.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:35 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:48:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:35.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:48:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:35.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:35 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:48:36 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:37.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:37 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:37.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:38 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:38 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:38 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 704 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:48:38 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:39.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:39 np0005592158 kernel: SELinux:  Converting 2775 SID table entries...
Jan 22 08:48:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:48:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:39.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:48:39 np0005592158 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 08:48:39 np0005592158 kernel: SELinux:  policy capability open_perms=1
Jan 22 08:48:39 np0005592158 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 08:48:39 np0005592158 kernel: SELinux:  policy capability always_check_network=0
Jan 22 08:48:39 np0005592158 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 08:48:39 np0005592158 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 08:48:39 np0005592158 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 08:48:40 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:48:40 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:41.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:48:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:41.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:48:42 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:42 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0.
Jan 22 08:48:42 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:48:42.876170) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 08:48:42 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25
Jan 22 08:48:42 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089722876225, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 3088, "num_deletes": 507, "total_data_size": 5953851, "memory_usage": 6059256, "flush_reason": "Manual Compaction"}
Jan 22 08:48:42 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started
Jan 22 08:48:42 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089722903952, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 3880836, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12619, "largest_seqno": 15702, "table_properties": {"data_size": 3869661, "index_size": 6325, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3781, "raw_key_size": 30436, "raw_average_key_size": 20, "raw_value_size": 3843024, "raw_average_value_size": 2563, "num_data_blocks": 276, "num_entries": 1499, "num_filter_entries": 1499, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769089508, "oldest_key_time": 1769089508, "file_creation_time": 1769089722, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:48:42 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 27850 microseconds, and 8969 cpu microseconds.
Jan 22 08:48:42 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:48:42 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:48:42.904028) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 3880836 bytes OK
Jan 22 08:48:42 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:48:42.904049) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started
Jan 22 08:48:42 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:48:42.906180) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done
Jan 22 08:48:42 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:48:42.906196) EVENT_LOG_v1 {"time_micros": 1769089722906191, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 08:48:42 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:48:42.906216) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 08:48:42 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 5938988, prev total WAL file size 5938988, number of live WAL files 2.
Jan 22 08:48:42 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:48:42 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:48:42.907685) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Jan 22 08:48:42 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 08:48:42 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(3789KB)], [24(8117KB)]
Jan 22 08:48:42 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089722907738, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 12192683, "oldest_snapshot_seqno": -1}
Jan 22 08:48:42 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 5026 keys, 10032565 bytes, temperature: kUnknown
Jan 22 08:48:42 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089722988316, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 10032565, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9996957, "index_size": 21930, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12613, "raw_key_size": 125776, "raw_average_key_size": 25, "raw_value_size": 9903805, "raw_average_value_size": 1970, "num_data_blocks": 912, "num_entries": 5026, "num_filter_entries": 5026, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769089722, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:48:42 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:48:42 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:48:42.988602) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 10032565 bytes
Jan 22 08:48:42 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:48:42.991378) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.1 rd, 124.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 7.9 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(5.7) write-amplify(2.6) OK, records in: 6057, records dropped: 1031 output_compression: NoCompression
Jan 22 08:48:42 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:48:42.991399) EVENT_LOG_v1 {"time_micros": 1769089722991388, "job": 12, "event": "compaction_finished", "compaction_time_micros": 80677, "compaction_time_cpu_micros": 27521, "output_level": 6, "num_output_files": 1, "total_output_size": 10032565, "num_input_records": 6057, "num_output_records": 5026, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 08:48:42 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:48:42 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089722992072, "job": 12, "event": "table_file_deletion", "file_number": 26}
Jan 22 08:48:42 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:48:42 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089722993259, "job": 12, "event": "table_file_deletion", "file_number": 24}
Jan 22 08:48:42 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:48:42.907589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:48:42 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:48:42.993290) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:48:42 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:48:42.993296) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:48:42 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:48:42.993298) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:48:42 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:48:42.993300) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:48:42 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:48:42.993301) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:48:43 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:43 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:43 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 714 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:48:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:43.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:43.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:44 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:45 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:45.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:45.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:45 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:48:46 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:46 np0005592158 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 22 08:48:47 np0005592158 podman[147240]: 2026-01-22 13:48:47.075808518 +0000 UTC m=+0.056258431 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 08:48:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:48:47.421 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 08:48:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:48:47.422 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 08:48:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:48:47.422 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 08:48:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:47.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:47.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:48 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:48 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:49 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 719 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:48:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:49.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:49.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:50 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:50 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:50 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:48:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:51.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:51.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:52 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:53 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:53 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:53.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:53.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:54 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:55 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:55.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:55.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:55 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:48:56 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:48:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:57.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:48:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:57.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:57 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:58 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:58 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 724 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:48:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:48:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:59.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:48:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:48:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:48:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:59.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:48:59 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:01 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:49:01 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:01.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:01.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:02 np0005592158 podman[153242]: 2026-01-22 13:49:02.114089675 +0000 UTC m=+0.093028304 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Jan 22 08:49:02 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:03.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:03 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:03 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:03 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 734 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:49:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:03.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:05 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:05 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:49:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:05.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:49:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:05.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:06 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:06 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:49:07 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:07.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:49:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:07.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:49:08 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:09.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:49:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:09.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:49:09 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 739 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:49:09 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:09 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:11 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:49:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:11.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:11.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:13.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:13.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:13 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:13 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:13 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:15 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:15.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:15.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:16 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:49:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:17.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:17 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:17.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:18 np0005592158 podman[163894]: 2026-01-22 13:49:18.06544634 +0000 UTC m=+0.054037177 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 22 08:49:19 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:19 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:19 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 08:49:19 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:49:19 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:19 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:49:19 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:49:19 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 744 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:49:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:49:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:19.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:49:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:19.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:21 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:49:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:21.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:21.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:23 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:23.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:23.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:24 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:24 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:24 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:24 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:24 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 749 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:49:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:25.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:25.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:26 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:49:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:27.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:27.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:29.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:29 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:29 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:29 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:29 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:29 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:29 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 754 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:49:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:29.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:31 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:49:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:31.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:31 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:31 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:31.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:33 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:33 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:33 np0005592158 podman[164308]: 2026-01-22 13:49:33.193444319 +0000 UTC m=+0.132492327 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 22 08:49:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:33.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:33.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:34 np0005592158 kernel: SELinux:  Converting 2776 SID table entries...
Jan 22 08:49:34 np0005592158 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 08:49:34 np0005592158 kernel: SELinux:  policy capability open_perms=1
Jan 22 08:49:34 np0005592158 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 08:49:34 np0005592158 kernel: SELinux:  policy capability always_check_network=0
Jan 22 08:49:34 np0005592158 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 08:49:34 np0005592158 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 08:49:34 np0005592158 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 08:49:35 np0005592158 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Jan 22 08:49:35 np0005592158 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 22 08:49:35 np0005592158 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Jan 22 08:49:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:49:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:35.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:49:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:35.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:36 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 764 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:49:36 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:49:37 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:37 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:37 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:37 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:37 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:49:37 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:49:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:37.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:49:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:37.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:49:38 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:39 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:39 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 769 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:49:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:49:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:39.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:49:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:39.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:40 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:40 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:41 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:49:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:41.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:41 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:49:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:41.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:49:42 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:43.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:43 np0005592158 systemd[1]: Stopping OpenSSH server daemon...
Jan 22 08:49:43 np0005592158 systemd[1]: sshd.service: Deactivated successfully.
Jan 22 08:49:43 np0005592158 systemd[1]: Stopped OpenSSH server daemon.
Jan 22 08:49:43 np0005592158 systemd[1]: sshd.service: Consumed 2.498s CPU time, read 564.0K from disk, written 8.0K to disk.
Jan 22 08:49:43 np0005592158 systemd[1]: Stopped target sshd-keygen.target.
Jan 22 08:49:43 np0005592158 systemd[1]: Stopping sshd-keygen.target...
Jan 22 08:49:43 np0005592158 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 22 08:49:43 np0005592158 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 22 08:49:43 np0005592158 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 22 08:49:43 np0005592158 systemd[1]: Reached target sshd-keygen.target.
Jan 22 08:49:43 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:43 np0005592158 systemd[1]: Starting OpenSSH server daemon...
Jan 22 08:49:43 np0005592158 systemd[1]: Started OpenSSH server daemon.
Jan 22 08:49:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:43.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:44 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:45 np0005592158 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 08:49:45 np0005592158 systemd[1]: Starting man-db-cache-update.service...
Jan 22 08:49:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:45.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:45 np0005592158 systemd[1]: Reloading.
Jan 22 08:49:45 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:45.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:45 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:49:45 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:49:46 np0005592158 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 08:49:46 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:49:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:49:47.422 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 08:49:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:49:47.424 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 08:49:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:49:47.424 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 08:49:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:49:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:47.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:49:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:49:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:47.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:49:49 np0005592158 podman[168876]: 2026-01-22 13:49:49.072785263 +0000 UTC m=+0.056950926 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:49:49 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:49.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:49.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:50 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:50 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:50 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 774 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:49:50 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:51 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:51 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:49:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:51.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:51.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:51 np0005592158 python3.9[171430]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 08:49:51 np0005592158 systemd[1]: Reloading.
Jan 22 08:49:52 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:49:52 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:49:52 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:53 np0005592158 python3.9[172990]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 08:49:53 np0005592158 systemd[1]: Reloading.
Jan 22 08:49:53 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:49:53 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:49:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:53.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:53.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:54 np0005592158 python3.9[174163]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 08:49:54 np0005592158 systemd[1]: Reloading.
Jan 22 08:49:54 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:54 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:54 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 784 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:49:54 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:49:54 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:49:54 np0005592158 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 08:49:54 np0005592158 systemd[1]: Finished man-db-cache-update.service.
Jan 22 08:49:54 np0005592158 systemd[1]: man-db-cache-update.service: Consumed 11.167s CPU time.
Jan 22 08:49:54 np0005592158 systemd[1]: run-r9e99c1be9736462a9f21298dbcda3d62.service: Deactivated successfully.
Jan 22 08:49:55 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:55 np0005592158 python3.9[174612]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 08:49:55 np0005592158 systemd[1]: Reloading.
Jan 22 08:49:55 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:49:55 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:49:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:49:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:55.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:49:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:55.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:56 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:56 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:49:57 np0005592158 python3.9[174802]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:49:57 np0005592158 systemd[1]: Reloading.
Jan 22 08:49:57 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:49:57 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:49:57 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:57.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:49:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:57.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:49:58 np0005592158 python3.9[174992]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:49:58 np0005592158 systemd[1]: Reloading.
Jan 22 08:49:58 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:49:58 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:58 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:49:59 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:59 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 789 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:49:59 np0005592158 python3.9[175181]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:49:59 np0005592158 systemd[1]: Reloading.
Jan 22 08:49:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:59.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:59 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:49:59 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:49:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:49:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:59.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:00 np0005592158 python3.9[175371]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:00 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:00 np0005592158 ceph-mon[81715]: Health detail: HEALTH_WARN 2 slow ops, oldest one blocked for 789 sec, osd.2 has slow ops
Jan 22 08:50:00 np0005592158 ceph-mon[81715]: [WRN] SLOW_OPS: 2 slow ops, oldest one blocked for 789 sec, osd.2 has slow ops
Jan 22 08:50:01 np0005592158 python3.9[175526]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:01 np0005592158 systemd[1]: Reloading.
Jan 22 08:50:01 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:50:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:01.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:01 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:01 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:01 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:50:01 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:50:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:50:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:01.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:50:03 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:03.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:03.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:04 np0005592158 podman[175590]: 2026-01-22 13:50:04.117514011 +0000 UTC m=+0.100342522 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 22 08:50:04 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:05 np0005592158 python3.9[175742]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 08:50:05 np0005592158 systemd[1]: Reloading.
Jan 22 08:50:05 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:50:05 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:50:05 np0005592158 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 22 08:50:05 np0005592158 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 22 08:50:05 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:05.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:05.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:06 np0005592158 python3.9[175935]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:06 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:50:06 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:06 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:07 np0005592158 python3.9[176090]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:07 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:50:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:07.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:50:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:07.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:08 np0005592158 python3.9[176245]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:08 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:08 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 794 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:50:09 np0005592158 python3.9[176400]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:09.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:09 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:09 np0005592158 python3.9[176555]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:50:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:09.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:50:10 np0005592158 python3.9[176710]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:10 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:11 np0005592158 python3.9[176865]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:11 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:50:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:11.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:11 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:11.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:12 np0005592158 python3.9[177020]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:12 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:13 np0005592158 python3.9[177175]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:13.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:13.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:14 np0005592158 python3.9[177330]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:14 np0005592158 python3.9[177485]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:15 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:15 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 804 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:50:15 np0005592158 python3.9[177640]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:50:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:15.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:50:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:15.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:16 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:16 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:16 np0005592158 python3.9[177795]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:16 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:50:17 np0005592158 python3.9[177950]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:17 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:17.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:17.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:19 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:19 np0005592158 podman[178105]: 2026-01-22 13:50:19.190123197 +0000 UTC m=+0.059392683 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 08:50:19 np0005592158 python3.9[178106]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:50:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:19.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:19.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:19 np0005592158 python3.9[178276]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:50:20 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:20 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 809 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:50:20 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:20 np0005592158 python3.9[178428]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:50:21 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:21 np0005592158 python3.9[178580]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:50:21 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:50:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:21.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:21.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:22 np0005592158 python3.9[178732]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:50:22 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:22 np0005592158 python3.9[178884]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:50:23 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:50:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:23.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:50:23 np0005592158 python3.9[179034]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:50:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:50:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:23.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:50:24 np0005592158 python3.9[179186]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:25 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:25 np0005592158 python3.9[179311]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769089824.164804-1647-202505359062767/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:25.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:50:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:25.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:50:26 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:26 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:26 np0005592158 python3.9[179463]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:26 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:50:27 np0005592158 python3.9[179588]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769089825.8114429-1647-46758468724157/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:27 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:27.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:27 np0005592158 python3.9[179742]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:27.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:28 np0005592158 python3.9[179867]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769089827.2377377-1647-171893181744896/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:29 np0005592158 python3.9[180019]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:29 np0005592158 python3.9[180144]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769089828.5214965-1647-226599488530983/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:29.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:30.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:30 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:30 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 814 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:50:30 np0005592158 python3.9[180296]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:31 np0005592158 python3.9[180421]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769089830.0696983-1647-228333974146193/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:31 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:50:31 np0005592158 python3.9[180573]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:31.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:32 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:32 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:50:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:32.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:50:32 np0005592158 python3.9[180698]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769089831.3555472-1647-78635412338917/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:33 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:33 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:33 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:33 np0005592158 python3.9[180850]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:33 np0005592158 python3.9[180973]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769089832.6156254-1647-72938286260976/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:34.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:34.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:34 np0005592158 podman[181125]: 2026-01-22 13:50:34.276596018 +0000 UTC m=+0.077906965 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 22 08:50:34 np0005592158 python3.9[181126]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:34 np0005592158 python3.9[181276]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769089833.892297-1647-45991852912940/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:35 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:35 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 819 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:50:35 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:35 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:36.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:50:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:36.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:50:36 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:50:36 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:37 np0005592158 python3.9[181428]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 22 08:50:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:38.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:38.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:38 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:38 np0005592158 python3.9[181712]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:39 np0005592158 python3.9[181864]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:39 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:39 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 824 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:50:39 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:50:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:40.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:50:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:40.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:50:40 np0005592158 python3.9[182016]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:40 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:40 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:50:40 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:50:40 np0005592158 python3.9[182168]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:41 np0005592158 python3.9[182320]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:41 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:50:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:42.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:42 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:42 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:42.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:42 np0005592158 python3.9[182472]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:43 np0005592158 python3.9[182624]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:43 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:43 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 834 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:50:43 np0005592158 python3.9[182776]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:50:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:44.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:50:44 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:44.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:44 np0005592158 python3.9[182928]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:44 np0005592158 python3.9[183080]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:45 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:45 np0005592158 python3.9[183232]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:46.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:46 np0005592158 python3.9[183384]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:50:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:46.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:50:46 np0005592158 python3.9[183536]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:50:47.423 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 08:50:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:50:47.424 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 08:50:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:50:47.424 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 08:50:47 np0005592158 python3.9[183688]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:50:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:48.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:50:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:50:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:48.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:50:49 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:50:49 np0005592158 python3.9[183840]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:49 np0005592158 podman[183935]: 2026-01-22 13:50:49.635734047 +0000 UTC m=+0.056932656 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 08:50:49 np0005592158 python3.9[183983]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089848.69712-2310-91455059603633/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:50:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:50.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:50:50 np0005592158 ceph-mds[83358]: mds.beacon.cephfs.compute-1.ofmmzj missed beacon ack from the monitors
Jan 22 08:50:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:50.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:50 np0005592158 python3.9[184135]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:51 np0005592158 python3.9[184258]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089849.9758122-2310-214652688515803/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:51 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:51 np0005592158 python3.9[184410]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:50:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:52.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:50:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:50:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:52.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:50:52 np0005592158 python3.9[184533]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089851.2439692-2310-158632212388995/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:53 np0005592158 python3.9[184735]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:53 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:53 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:53 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:53 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:53 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:50:53 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:53 np0005592158 python3.9[184858]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089852.5605118-2310-151784308468463/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:53 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:53 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:53 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 839 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:50:53 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:50:53 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:50:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:54.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:50:54 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:50:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:54.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:54 np0005592158 python3.9[185010]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:55 np0005592158 python3.9[185133]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089853.9173703-2310-122207087521132/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:55 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:55 np0005592158 python3.9[185285]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:56.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:50:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:56.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:50:56 np0005592158 python3.9[185408]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089855.2714481-2310-56696901510138/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:57 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:57 np0005592158 python3.9[185560]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:57 np0005592158 python3.9[185683]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089856.543057-2310-18921804727397/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:57 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:57 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:50:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:58.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:50:58 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Jan 22 08:50:58 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:50:58.106774) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 08:50:58 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Jan 22 08:50:58 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089858106840, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1745, "num_deletes": 252, "total_data_size": 3614054, "memory_usage": 3669136, "flush_reason": "Manual Compaction"}
Jan 22 08:50:58 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Jan 22 08:50:58 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089858119798, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 1454881, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15707, "largest_seqno": 17447, "table_properties": {"data_size": 1449261, "index_size": 2567, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 17004, "raw_average_key_size": 21, "raw_value_size": 1435883, "raw_average_value_size": 1831, "num_data_blocks": 112, "num_entries": 784, "num_filter_entries": 784, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769089723, "oldest_key_time": 1769089723, "file_creation_time": 1769089858, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:50:58 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 13075 microseconds, and 5612 cpu microseconds.
Jan 22 08:50:58 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:50:58 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:50:58.119858) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 1454881 bytes OK
Jan 22 08:50:58 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:50:58.119882) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Jan 22 08:50:58 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:50:58.121289) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Jan 22 08:50:58 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:50:58.121305) EVENT_LOG_v1 {"time_micros": 1769089858121299, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 08:50:58 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:50:58.121324) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 08:50:58 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 3605715, prev total WAL file size 3605715, number of live WAL files 2.
Jan 22 08:50:58 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:50:58 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:50:58.122478) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353033' seq:0, type:0; will stop at (end)
Jan 22 08:50:58 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 08:50:58 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(1420KB)], [27(9797KB)]
Jan 22 08:50:58 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089858122536, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 11487446, "oldest_snapshot_seqno": -1}
Jan 22 08:50:58 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 5352 keys, 8490400 bytes, temperature: kUnknown
Jan 22 08:50:58 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089858179745, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 8490400, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8455832, "index_size": 20058, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13445, "raw_key_size": 133897, "raw_average_key_size": 25, "raw_value_size": 8359955, "raw_average_value_size": 1562, "num_data_blocks": 828, "num_entries": 5352, "num_filter_entries": 5352, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769089858, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:50:58 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:50:58 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:50:58.180065) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 8490400 bytes
Jan 22 08:50:58 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:50:58.181965) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 200.3 rd, 148.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 9.6 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(13.7) write-amplify(5.8) OK, records in: 5810, records dropped: 458 output_compression: NoCompression
Jan 22 08:50:58 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:50:58.182021) EVENT_LOG_v1 {"time_micros": 1769089858182000, "job": 14, "event": "compaction_finished", "compaction_time_micros": 57351, "compaction_time_cpu_micros": 22014, "output_level": 6, "num_output_files": 1, "total_output_size": 8490400, "num_input_records": 5810, "num_output_records": 5352, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 08:50:58 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:50:58 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089858183216, "job": 14, "event": "table_file_deletion", "file_number": 29}
Jan 22 08:50:58 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:50:58 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089858185151, "job": 14, "event": "table_file_deletion", "file_number": 27}
Jan 22 08:50:58 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:50:58.122410) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:50:58 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:50:58.185296) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:50:58 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:50:58.185309) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:50:58 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:50:58.185312) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:50:58 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:50:58.185314) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:50:58 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:50:58.185316) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:50:58 np0005592158 python3.9[185835]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:50:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:50:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:58.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:50:58 np0005592158 python3.9[185958]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089857.7539034-2310-90855280133990/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:59 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:59 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 844 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:50:59 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:50:59 np0005592158 python3.9[186110]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:00.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:00 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:00.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:00 np0005592158 python3.9[186233]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089859.154633-2310-39767403656276/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:01 np0005592158 python3.9[186385]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:01 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:01 np0005592158 python3.9[186508]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089860.4921417-2310-94909633582820/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:51:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:02.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:51:02 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:02 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:02.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:02 np0005592158 python3.9[186660]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:02 np0005592158 python3.9[186783]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089861.9539995-2310-250795205863411/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:03 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 854 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:51:03 np0005592158 python3.9[186935]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:04.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:04 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:51:04 np0005592158 python3.9[187058]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089863.1165297-2310-40228163846909/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:04 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:04 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:04.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:04 np0005592158 podman[187182]: 2026-01-22 13:51:04.76095552 +0000 UTC m=+0.115575327 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 08:51:04 np0005592158 python3.9[187227]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:05 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:05 np0005592158 python3.9[187359]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089864.417885-2310-88465903192870/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:06.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:06 np0005592158 python3.9[187511]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:06 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:51:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:06.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:51:06 np0005592158 python3.9[187634]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089865.5973155-2310-18496025185074/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:07 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:08.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:08 np0005592158 python3.9[187784]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:51:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:08.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:08 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:09 np0005592158 python3.9[187939]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 22 08:51:09 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:51:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:51:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:10.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:51:10 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:10 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 859 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:51:10 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:51:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:10.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:51:11 np0005592158 ceph-mgr[82073]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 08:51:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:12.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:12.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:12 np0005592158 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 22 08:51:13 np0005592158 python3.9[188095]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:13 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:13 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:13 np0005592158 python3.9[188247]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:13 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:13 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:14.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:51:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:14.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:51:14 np0005592158 python3.9[188399]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:14 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:51:14 np0005592158 python3.9[188551]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:16 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:16.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:16 np0005592158 python3.9[188703]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:51:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:16.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:51:16 np0005592158 python3.9[188855]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:17 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:17 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:17 np0005592158 python3.9[189007]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:18.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:18.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:18 np0005592158 python3.9[189159]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:18 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:18 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 864 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:51:18 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:19 np0005592158 python3.9[189311]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:19 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:51:19 np0005592158 python3.9[189463]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:19 np0005592158 podman[189464]: 2026-01-22 13:51:19.876150851 +0000 UTC m=+0.083862047 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 22 08:51:19 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:20.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:20.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:20 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:20 np0005592158 python3.9[189635]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:51:21 np0005592158 systemd[1]: Reloading.
Jan 22 08:51:21 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:51:21 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:51:21 np0005592158 systemd[1]: Starting libvirt logging daemon socket...
Jan 22 08:51:21 np0005592158 systemd[1]: Listening on libvirt logging daemon socket.
Jan 22 08:51:21 np0005592158 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 22 08:51:21 np0005592158 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 22 08:51:21 np0005592158 systemd[1]: Starting libvirt logging daemon...
Jan 22 08:51:21 np0005592158 systemd[1]: Started libvirt logging daemon.
Jan 22 08:51:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:51:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:22.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:51:22 np0005592158 python3.9[189828]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:51:22 np0005592158 systemd[1]: Reloading.
Jan 22 08:51:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:22.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:22 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:22 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:51:22 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:51:22 np0005592158 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 22 08:51:22 np0005592158 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 22 08:51:22 np0005592158 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 22 08:51:22 np0005592158 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 22 08:51:22 np0005592158 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 22 08:51:22 np0005592158 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 22 08:51:22 np0005592158 systemd[1]: Starting libvirt nodedev daemon...
Jan 22 08:51:22 np0005592158 systemd[1]: Started libvirt nodedev daemon.
Jan 22 08:51:23 np0005592158 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 22 08:51:23 np0005592158 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 22 08:51:23 np0005592158 python3.9[190045]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:51:23 np0005592158 systemd[1]: Reloading.
Jan 22 08:51:23 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:23 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 873 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:51:23 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:51:23 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:51:23 np0005592158 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 22 08:51:23 np0005592158 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 22 08:51:23 np0005592158 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 22 08:51:23 np0005592158 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 22 08:51:23 np0005592158 systemd[1]: Starting libvirt proxy daemon...
Jan 22 08:51:23 np0005592158 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 22 08:51:23 np0005592158 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 22 08:51:23 np0005592158 systemd[1]: Started libvirt proxy daemon.
Jan 22 08:51:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:24.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:24.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:24 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:24 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:51:24 np0005592158 python3.9[190266]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:51:24 np0005592158 systemd[1]: Reloading.
Jan 22 08:51:24 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:51:24 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:51:24 np0005592158 setroubleshoot[190016]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 78ef7930-963d-408a-ac09-8b3721c30352
Jan 22 08:51:24 np0005592158 setroubleshoot[190016]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Jan 22 08:51:24 np0005592158 setroubleshoot[190016]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 78ef7930-963d-408a-ac09-8b3721c30352
Jan 22 08:51:24 np0005592158 setroubleshoot[190016]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Jan 22 08:51:24 np0005592158 systemd[1]: Listening on libvirt locking daemon socket.
Jan 22 08:51:24 np0005592158 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 22 08:51:24 np0005592158 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 22 08:51:25 np0005592158 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 22 08:51:25 np0005592158 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 22 08:51:25 np0005592158 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 22 08:51:25 np0005592158 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 22 08:51:25 np0005592158 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 22 08:51:25 np0005592158 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 22 08:51:25 np0005592158 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 22 08:51:25 np0005592158 systemd[1]: Starting libvirt QEMU daemon...
Jan 22 08:51:25 np0005592158 systemd[1]: Started libvirt QEMU daemon.
Jan 22 08:51:25 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:25 np0005592158 python3.9[190482]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:51:25 np0005592158 systemd[1]: Reloading.
Jan 22 08:51:26 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:51:26 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:51:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:26.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:26 np0005592158 systemd[1]: Starting libvirt secret daemon socket...
Jan 22 08:51:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:26.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:26 np0005592158 systemd[1]: Listening on libvirt secret daemon socket.
Jan 22 08:51:26 np0005592158 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 22 08:51:26 np0005592158 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 22 08:51:26 np0005592158 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 22 08:51:26 np0005592158 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 22 08:51:26 np0005592158 systemd[1]: Starting libvirt secret daemon...
Jan 22 08:51:26 np0005592158 systemd[1]: Started libvirt secret daemon.
Jan 22 08:51:26 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:26 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:27 np0005592158 python3.9[190694]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:28.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:28 np0005592158 python3.9[190846]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 22 08:51:28 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:51:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:28.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:51:28 np0005592158 python3.9[190998]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:51:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:30.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:30 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:51:30 np0005592158 python3.9[191152]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 22 08:51:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:30.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:30 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:31 np0005592158 python3.9[191302]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:31 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:31 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:31 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Jan 22 08:51:31 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:51:31.625222) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 08:51:31 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Jan 22 08:51:31 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089891625324, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 643, "num_deletes": 251, "total_data_size": 922151, "memory_usage": 934712, "flush_reason": "Manual Compaction"}
Jan 22 08:51:31 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Jan 22 08:51:31 np0005592158 python3.9[191423]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089890.7701848-3384-195326700602517/.source.xml follow=False _original_basename=secret.xml.j2 checksum=661e779e9ad9ab9796e6f7af12c5e6a2862cccb5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:31 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089891871155, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 605960, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17452, "largest_seqno": 18090, "table_properties": {"data_size": 602925, "index_size": 943, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7970, "raw_average_key_size": 19, "raw_value_size": 596493, "raw_average_value_size": 1469, "num_data_blocks": 42, "num_entries": 406, "num_filter_entries": 406, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769089858, "oldest_key_time": 1769089858, "file_creation_time": 1769089891, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:51:31 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 245947 microseconds, and 3997 cpu microseconds.
Jan 22 08:51:31 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:51:31 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:51:31.871207) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 605960 bytes OK
Jan 22 08:51:31 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:51:31.871228) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Jan 22 08:51:31 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:51:31.877130) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Jan 22 08:51:31 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:51:31.877175) EVENT_LOG_v1 {"time_micros": 1769089891877165, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 08:51:31 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:51:31.877247) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 08:51:31 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 918515, prev total WAL file size 918515, number of live WAL files 2.
Jan 22 08:51:31 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:51:31 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:51:31.878137) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Jan 22 08:51:31 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 08:51:31 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(591KB)], [30(8291KB)]
Jan 22 08:51:31 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089891878227, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 9096360, "oldest_snapshot_seqno": -1}
Jan 22 08:51:32 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 5247 keys, 7419371 bytes, temperature: kUnknown
Jan 22 08:51:32 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089892016701, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 7419371, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7386391, "index_size": 18790, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13125, "raw_key_size": 132589, "raw_average_key_size": 25, "raw_value_size": 7292995, "raw_average_value_size": 1389, "num_data_blocks": 771, "num_entries": 5247, "num_filter_entries": 5247, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769089891, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:51:32 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:51:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:51:32.017023) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7419371 bytes
Jan 22 08:51:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:51:32.033169) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 65.6 rd, 53.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 8.1 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(27.3) write-amplify(12.2) OK, records in: 5758, records dropped: 511 output_compression: NoCompression
Jan 22 08:51:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:51:32.033215) EVENT_LOG_v1 {"time_micros": 1769089892033197, "job": 16, "event": "compaction_finished", "compaction_time_micros": 138574, "compaction_time_cpu_micros": 21703, "output_level": 6, "num_output_files": 1, "total_output_size": 7419371, "num_input_records": 5758, "num_output_records": 5247, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 08:51:32 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:51:32 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089892034171, "job": 16, "event": "table_file_deletion", "file_number": 32}
Jan 22 08:51:32 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:51:32 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089892037720, "job": 16, "event": "table_file_deletion", "file_number": 30}
Jan 22 08:51:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:51:31.878026) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:51:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:51:32.037803) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:51:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:51:32.037820) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:51:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:51:32.037822) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:51:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:51:32.037824) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:51:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:51:32.037829) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:51:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:51:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:32.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:51:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:51:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:32.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:51:32 np0005592158 python3.9[191575]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 088fe176-0106-5401-803c-2da38b73b76a#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:51:32 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:33 np0005592158 python3.9[191737]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:34.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:34 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 884 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:51:34 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:34 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:34.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:34 np0005592158 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 22 08:51:34 np0005592158 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.043s CPU time.
Jan 22 08:51:34 np0005592158 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 22 08:51:35 np0005592158 podman[192049]: 2026-01-22 13:51:35.062889092 +0000 UTC m=+0.091935056 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 08:51:35 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:51:35 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:35 np0005592158 python3.9[192226]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:36.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:36.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:36 np0005592158 python3.9[192378]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:36 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:36 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:37 np0005592158 python3.9[192501]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089896.1780925-3549-108645698587377/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:38.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:38 np0005592158 python3.9[192653]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.002000054s ======
Jan 22 08:51:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:38.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Jan 22 08:51:39 np0005592158 python3.9[192805]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:39 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:39 np0005592158 python3.9[192883]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:40.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:40 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:40 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 889 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:51:40 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:40 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:51:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:51:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:40.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:51:40 np0005592158 python3.9[193035]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:40 np0005592158 python3.9[193113]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.7qojvd0v recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:41 np0005592158 python3.9[193265]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:41 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:51:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:42.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:51:42 np0005592158 python3.9[193343]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:42.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:42 np0005592158 python3.9[193495]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:51:43 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:43 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:44 np0005592158 python3[193648]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 22 08:51:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:44.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:44.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:44 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:44 np0005592158 python3.9[193800]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:45 np0005592158 python3.9[193879]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:45 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:51:46 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:46 np0005592158 python3.9[194031]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:51:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:46.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:51:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:46.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:46 np0005592158 python3.9[194156]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089905.5316596-3816-75644588008420/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:47 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:47 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:51:47.424 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 08:51:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:51:47.426 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 08:51:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:51:47.426 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 08:51:47 np0005592158 python3.9[194308]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:47 np0005592158 python3.9[194386]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:48.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:48.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:48 np0005592158 python3.9[194538]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:49 np0005592158 python3.9[194616]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:50 np0005592158 podman[194769]: 2026-01-22 13:51:50.084786471 +0000 UTC m=+0.067413970 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 22 08:51:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:50.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:50.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:50 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:51:50 np0005592158 python3.9[194768]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:50 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:50 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 894 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:51:50 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:50 np0005592158 auditd[704]: Audit daemon rotating log files
Jan 22 08:51:51 np0005592158 python3.9[194913]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089909.5097935-3933-51116767838228/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:52.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:52 np0005592158 python3.9[195065]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:52 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:52 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:52 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:52.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:52 np0005592158 python3.9[195217]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:51:53 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:53 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 904 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:51:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:51:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:54.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:51:54 np0005592158 python3.9[195502]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:51:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:54.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:51:54 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:55 np0005592158 python3.9[195654]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:51:55 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:51:55 np0005592158 python3.9[195807]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:51:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:56.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:56 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:56 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:51:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:56.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:51:56 np0005592158 python3.9[195961]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:51:58 np0005592158 python3.9[196116]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:58.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:51:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:51:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:58.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:51:58 np0005592158 python3.9[196268]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:59 np0005592158 python3.9[196391]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089918.2726147-4149-210332659786780/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:59 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:59 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:59 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:51:59 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:59 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:51:59 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 909 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:52:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:00.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:00 np0005592158 python3.9[196543]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:52:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:52:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:00.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:52:00 np0005592158 python3.9[196666]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089919.7043467-4194-201113204815382/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:52:00 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:52:00 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:52:00 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:00 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:52:00 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:52:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:52:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:02.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:52:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:02.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:02 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:02 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:03 np0005592158 python3.9[196818]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:52:03 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:03 np0005592158 python3.9[196941]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089921.278591-4239-263544070791140/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:52:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:04.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:52:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:04.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:52:04 np0005592158 python3.9[197093]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:52:04 np0005592158 systemd[1]: Reloading.
Jan 22 08:52:04 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:52:04 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:52:05 np0005592158 systemd[1]: Reached target edpm_libvirt.target.
Jan 22 08:52:05 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:06 np0005592158 podman[197232]: 2026-01-22 13:52:06.130916729 +0000 UTC m=+0.111988069 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 22 08:52:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:52:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:06.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:52:06 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:52:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:06.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:06 np0005592158 python3.9[197309]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 22 08:52:06 np0005592158 systemd[1]: Reloading.
Jan 22 08:52:06 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:52:06 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:52:08 np0005592158 systemd[1]: Reloading.
Jan 22 08:52:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:08.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:08 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:52:08 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:52:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:08.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:09 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:09 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:09 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:09 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:09 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 914 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:52:09 np0005592158 systemd[1]: session-48.scope: Deactivated successfully.
Jan 22 08:52:09 np0005592158 systemd[1]: session-48.scope: Consumed 3min 34.813s CPU time.
Jan 22 08:52:09 np0005592158 systemd-logind[787]: Session 48 logged out. Waiting for processes to exit.
Jan 22 08:52:09 np0005592158 systemd-logind[787]: Removed session 48.
Jan 22 08:52:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.003000080s ======
Jan 22 08:52:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:10.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Jan 22 08:52:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:10.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:11 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:52:11 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:12.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:12.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:13 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:13 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:14.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:14.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:14 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:14 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:52:14 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:14 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 924 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:52:14 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:52:15 np0005592158 systemd-logind[787]: New session 49 of user zuul.
Jan 22 08:52:15 np0005592158 systemd[1]: Started Session 49 of User zuul.
Jan 22 08:52:15 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:15 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:16 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:52:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:16.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:16 np0005592158 python3.9[197610]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:52:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:52:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:16.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:52:16 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:17 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:17 np0005592158 python3.9[197764]: ansible-ansible.builtin.service_facts Invoked
Jan 22 08:52:17 np0005592158 network[197781]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 08:52:17 np0005592158 network[197782]: 'network-scripts' will be removed from distribution in near future.
Jan 22 08:52:17 np0005592158 network[197783]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 08:52:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:18.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:18.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:18 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0.
Jan 22 08:52:18 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:52:18.670469) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 08:52:18 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34
Jan 22 08:52:18 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089938670555, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 760, "num_deletes": 250, "total_data_size": 1333262, "memory_usage": 1355048, "flush_reason": "Manual Compaction"}
Jan 22 08:52:18 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started
Jan 22 08:52:18 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089938691238, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 868137, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18095, "largest_seqno": 18850, "table_properties": {"data_size": 864527, "index_size": 1390, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8225, "raw_average_key_size": 17, "raw_value_size": 856944, "raw_average_value_size": 1862, "num_data_blocks": 61, "num_entries": 460, "num_filter_entries": 460, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769089892, "oldest_key_time": 1769089892, "file_creation_time": 1769089938, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:52:18 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 20821 microseconds, and 4157 cpu microseconds.
Jan 22 08:52:18 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:52:18 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:52:18.691296) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 868137 bytes OK
Jan 22 08:52:18 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:52:18.691325) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started
Jan 22 08:52:18 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:52:18.694007) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done
Jan 22 08:52:18 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:52:18.694061) EVENT_LOG_v1 {"time_micros": 1769089938694048, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 08:52:18 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:52:18.694089) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 08:52:18 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 1329084, prev total WAL file size 1345479, number of live WAL files 2.
Jan 22 08:52:18 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:52:18 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:52:18.695317) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Jan 22 08:52:18 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 08:52:18 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(847KB)], [33(7245KB)]
Jan 22 08:52:18 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089938695452, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 8287508, "oldest_snapshot_seqno": -1}
Jan 22 08:52:18 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 5195 keys, 7744323 bytes, temperature: kUnknown
Jan 22 08:52:18 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089938942087, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 7744323, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7711336, "index_size": 18925, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12997, "raw_key_size": 133631, "raw_average_key_size": 25, "raw_value_size": 7618468, "raw_average_value_size": 1466, "num_data_blocks": 757, "num_entries": 5195, "num_filter_entries": 5195, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769089938, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:52:18 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:52:18 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:18 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:52:18.942428) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 7744323 bytes
Jan 22 08:52:18 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:52:18.948459) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 33.6 rd, 31.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 7.1 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(18.5) write-amplify(8.9) OK, records in: 5707, records dropped: 512 output_compression: NoCompression
Jan 22 08:52:18 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:52:18.948498) EVENT_LOG_v1 {"time_micros": 1769089938948483, "job": 18, "event": "compaction_finished", "compaction_time_micros": 246746, "compaction_time_cpu_micros": 19853, "output_level": 6, "num_output_files": 1, "total_output_size": 7744323, "num_input_records": 5707, "num_output_records": 5195, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 08:52:18 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:52:18 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089938948830, "job": 18, "event": "table_file_deletion", "file_number": 35}
Jan 22 08:52:18 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:52:18 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089938950261, "job": 18, "event": "table_file_deletion", "file_number": 33}
Jan 22 08:52:18 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:52:18.695052) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:52:18 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:52:18.950301) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:52:18 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:52:18.950305) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:52:18 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:52:18.950307) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:52:18 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:52:18.950308) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:52:18 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:52:18.950310) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:52:19 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 929 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:52:19 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:19 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:20.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:20 np0005592158 podman[197863]: 2026-01-22 13:52:20.199531662 +0000 UTC m=+0.068990183 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 22 08:52:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:52:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:20.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:52:21 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:21 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:52:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:22.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:52:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:22.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:52:22 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:23 np0005592158 python3.9[198074]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:52:23 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:24.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:24 np0005592158 python3.9[198158]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:52:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:24.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:24 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:25 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:26 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:52:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:26.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:26.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:26 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:26 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:28.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:28 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:28.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:29 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 934 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:52:29 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:30.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:52:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:30.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:52:30 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:30 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:30 np0005592158 python3.9[198311]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:52:31 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:52:31 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:32 np0005592158 python3.9[198463]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:52:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:32.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:32.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:32 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:33 np0005592158 python3.9[198616]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:52:33 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 944 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:52:33 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:33 np0005592158 python3.9[198768]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:52:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:52:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:34.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:52:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:52:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:34.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:52:34 np0005592158 python3.9[198921]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:52:35 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:35 np0005592158 python3.9[199044]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089954.2326922-246-234136023076544/.source.iscsi _original_basename=.z1svhdm_ follow=False checksum=c04402da62a45aeb02eef40454c1ebe55b259f0c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:52:36 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:52:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:52:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:36.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:52:36 np0005592158 podman[199196]: 2026-01-22 13:52:36.312536197 +0000 UTC m=+0.094393663 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 22 08:52:36 np0005592158 python3.9[199197]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:52:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:52:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:36.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:52:36 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:37 np0005592158 python3.9[199374]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:52:37 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:38.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:38.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:38 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:38 np0005592158 python3.9[199526]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:52:39 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:39 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 949 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:52:39 np0005592158 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 22 08:52:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:40.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:40.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:40 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:40 np0005592158 python3.9[199682]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:52:40 np0005592158 systemd[1]: Reloading.
Jan 22 08:52:41 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:52:41 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:52:41 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:52:41 np0005592158 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 22 08:52:41 np0005592158 systemd[1]: Starting Open-iSCSI...
Jan 22 08:52:41 np0005592158 kernel: Loading iSCSI transport class v2.0-870.
Jan 22 08:52:41 np0005592158 systemd[1]: Started Open-iSCSI.
Jan 22 08:52:41 np0005592158 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 22 08:52:41 np0005592158 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 22 08:52:41 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:42.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:52:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:42.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:52:42 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:42 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:42 np0005592158 python3.9[199880]: ansible-ansible.builtin.service_facts Invoked
Jan 22 08:52:42 np0005592158 network[199897]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 08:52:42 np0005592158 network[199898]: 'network-scripts' will be removed from distribution in near future.
Jan 22 08:52:42 np0005592158 network[199899]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 08:52:43 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:44.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:52:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:44.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:52:44 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:45 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:46 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:52:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:46.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:52:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:46.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:52:47 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:52:47.425 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 08:52:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:52:47.427 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 08:52:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:52:47.427 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 08:52:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:48.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:48 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:48 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 954 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:52:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:52:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:48.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:52:49 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:50.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:50 np0005592158 python3.9[200171]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:52:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:50.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:51 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:51 np0005592158 podman[200173]: 2026-01-22 13:52:51.106016072 +0000 UTC m=+0.084650865 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 08:52:51 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:52:52 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:52 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:52.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:52.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:52 np0005592158 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 08:52:52 np0005592158 systemd[1]: Starting man-db-cache-update.service...
Jan 22 08:52:52 np0005592158 systemd[1]: Reloading.
Jan 22 08:52:52 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:52:53 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:52:53 np0005592158 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 08:52:53 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:53 np0005592158 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 08:52:53 np0005592158 systemd[1]: Finished man-db-cache-update.service.
Jan 22 08:52:53 np0005592158 systemd[1]: run-re352d2953b404036b3ee02486a8957de.service: Deactivated successfully.
Jan 22 08:52:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:52:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:54.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:52:54 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 963 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:52:54 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:52:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:54.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:52:55 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:56 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:52:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:52:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:56.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:52:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:56.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:56 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:56 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:57 np0005592158 python3.9[200504]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 22 08:52:57 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:58 np0005592158 python3.9[200656]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 22 08:52:58 np0005592158 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 08:52:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:58.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:52:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:58.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:58 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 968 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:52:58 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:59 np0005592158 python3.9[200813]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:52:59 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:00 np0005592158 python3.9[200936]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089978.8205345-510-159001209055462/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:00.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:53:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:00.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:53:00 np0005592158 python3.9[201088]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:01 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:01 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:53:02 np0005592158 python3.9[201240]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:53:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:02.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:02 np0005592158 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 22 08:53:02 np0005592158 systemd[1]: Stopped Load Kernel Modules.
Jan 22 08:53:02 np0005592158 systemd[1]: Stopping Load Kernel Modules...
Jan 22 08:53:02 np0005592158 systemd[1]: Starting Load Kernel Modules...
Jan 22 08:53:02 np0005592158 systemd[1]: Finished Load Kernel Modules.
Jan 22 08:53:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:53:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:02.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:53:02 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:03 np0005592158 python3.9[201396]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:53:03 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:03 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 973 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:04.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:04 np0005592158 python3.9[201549]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:53:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:53:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:04.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:53:05 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:05 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:05 np0005592158 python3.9[201701]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:53:05 np0005592158 python3.9[201824]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089984.8463316-663-265881234612033/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:06.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:06 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:06 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:53:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:53:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:06.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:53:07 np0005592158 podman[201948]: 2026-01-22 13:53:07.020679791 +0000 UTC m=+0.103632854 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:53:07 np0005592158 python3.9[201990]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:53:07 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:08 np0005592158 python3.9[202155]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:08.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:08.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:08 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:08 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 978 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:08 np0005592158 python3.9[202307]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:09 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:09 np0005592158 python3.9[202459]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:10.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:10.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:10 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:10 np0005592158 python3.9[202611]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:11 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:53:11 np0005592158 python3.9[202763]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:11 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:12.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:12 np0005592158 python3.9[202915]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:12.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:12 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:12 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:12 np0005592158 python3.9[203067]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:13 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 983 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:13 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:14 np0005592158 python3.9[203219]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:53:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:14.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:14.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:14 np0005592158 podman[203418]: 2026-01-22 13:53:14.660461994 +0000 UTC m=+0.064049171 container exec 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 22 08:53:14 np0005592158 podman[203418]: 2026-01-22 13:53:14.757040984 +0000 UTC m=+0.160628161 container exec_died 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:53:14 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:15 np0005592158 python3.9[203635]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:53:16 np0005592158 python3.9[203952]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:53:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:16.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:16 np0005592158 systemd[1]: Listening on multipathd control socket.
Jan 22 08:53:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:16.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:16 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:53:17 np0005592158 python3.9[204108]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:53:17 np0005592158 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 22 08:53:17 np0005592158 udevadm[204113]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 22 08:53:17 np0005592158 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 22 08:53:17 np0005592158 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 22 08:53:17 np0005592158 multipathd[204116]: --------start up--------
Jan 22 08:53:17 np0005592158 multipathd[204116]: read /etc/multipath.conf
Jan 22 08:53:17 np0005592158 multipathd[204116]: path checkers start up
Jan 22 08:53:17 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:53:17 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:53:17 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:17 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:53:17 np0005592158 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 22 08:53:18 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:18 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:53:18 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:53:18 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:18.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:18.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:18 np0005592158 python3.9[204275]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 22 08:53:19 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 988 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:19 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:19 np0005592158 python3.9[204427]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 22 08:53:19 np0005592158 kernel: Key type psk registered
Jan 22 08:53:20 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:20.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:20.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:20 np0005592158 python3.9[204590]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:53:21 np0005592158 podman[204713]: 2026-01-22 13:53:21.273886674 +0000 UTC m=+0.060692310 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 08:53:21 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:21 np0005592158 python3.9[204714]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769090000.20843-1053-212470424857449/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:21 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:53:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:22.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:22 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:22 np0005592158 python3.9[204884]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:22.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:22 np0005592158 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 22 08:53:23 np0005592158 python3.9[205037]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:53:23 np0005592158 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 22 08:53:23 np0005592158 systemd[1]: Stopped Load Kernel Modules.
Jan 22 08:53:23 np0005592158 systemd[1]: Stopping Load Kernel Modules...
Jan 22 08:53:23 np0005592158 systemd[1]: Starting Load Kernel Modules...
Jan 22 08:53:23 np0005592158 systemd[1]: Finished Load Kernel Modules.
Jan 22 08:53:23 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:23 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 993 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:23 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:53:23 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:53:23 np0005592158 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 22 08:53:24 np0005592158 python3.9[205244]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:53:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:53:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:24.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:53:24 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:53:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:24.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:53:25 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:26.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:53:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:26.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:53:26 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:53:26 np0005592158 systemd[1]: Reloading.
Jan 22 08:53:26 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:26 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:53:26 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:53:26 np0005592158 systemd[1]: Reloading.
Jan 22 08:53:27 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:53:27 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:53:27 np0005592158 systemd-logind[787]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 22 08:53:27 np0005592158 systemd-logind[787]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 22 08:53:27 np0005592158 lvm[205360]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 08:53:27 np0005592158 lvm[205360]: VG ceph_vg0 finished
Jan 22 08:53:27 np0005592158 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 08:53:27 np0005592158 systemd[1]: Starting man-db-cache-update.service...
Jan 22 08:53:27 np0005592158 systemd[1]: Reloading.
Jan 22 08:53:27 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:27 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:53:27 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:53:27 np0005592158 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 08:53:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:28.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:53:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:28.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:53:28 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:28 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 998 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:28 np0005592158 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 08:53:28 np0005592158 systemd[1]: Finished man-db-cache-update.service.
Jan 22 08:53:28 np0005592158 systemd[1]: man-db-cache-update.service: Consumed 1.631s CPU time.
Jan 22 08:53:29 np0005592158 systemd[1]: run-r11a9d633f44c428092a4f53412932160.service: Deactivated successfully.
Jan 22 08:53:29 np0005592158 python3.9[206710]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:53:29 np0005592158 systemd[1]: Stopping Open-iSCSI...
Jan 22 08:53:29 np0005592158 iscsid[199722]: iscsid shutting down.
Jan 22 08:53:29 np0005592158 systemd[1]: iscsid.service: Deactivated successfully.
Jan 22 08:53:29 np0005592158 systemd[1]: Stopped Open-iSCSI.
Jan 22 08:53:29 np0005592158 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 22 08:53:29 np0005592158 systemd[1]: Starting Open-iSCSI...
Jan 22 08:53:29 np0005592158 systemd[1]: Started Open-iSCSI.
Jan 22 08:53:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:30.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:53:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:30.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:53:30 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:30 np0005592158 python3.9[206866]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:53:30 np0005592158 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 22 08:53:30 np0005592158 multipathd[204116]: exit (signal)
Jan 22 08:53:30 np0005592158 multipathd[204116]: --------shut down-------
Jan 22 08:53:30 np0005592158 systemd[1]: multipathd.service: Deactivated successfully.
Jan 22 08:53:30 np0005592158 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 22 08:53:30 np0005592158 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 22 08:53:31 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:31 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:31 np0005592158 multipathd[206872]: --------start up--------
Jan 22 08:53:31 np0005592158 multipathd[206872]: read /etc/multipath.conf
Jan 22 08:53:31 np0005592158 multipathd[206872]: path checkers start up
Jan 22 08:53:31 np0005592158 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 22 08:53:31 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:53:31 np0005592158 python3.9[207029]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:53:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:32.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:32 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:32.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:33 np0005592158 python3.9[207185]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:33 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:33 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:33 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1003 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:34.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:34 np0005592158 python3.9[207337]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 08:53:34 np0005592158 systemd[1]: Reloading.
Jan 22 08:53:34 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:53:34 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:53:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:34.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:35 np0005592158 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 22 08:53:35 np0005592158 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 22 08:53:35 np0005592158 python3.9[207524]: ansible-ansible.builtin.service_facts Invoked
Jan 22 08:53:35 np0005592158 network[207541]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 08:53:35 np0005592158 network[207542]: 'network-scripts' will be removed from distribution in near future.
Jan 22 08:53:35 np0005592158 network[207543]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 08:53:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:36.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:36 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:36.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:36 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:53:37 np0005592158 podman[207574]: 2026-01-22 13:53:37.21496307 +0000 UTC m=+0.137708881 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Jan 22 08:53:37 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:37 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:53:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:38.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:53:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:53:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:38.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:53:39 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:39 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1008 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:39 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:39 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:53:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:40.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:53:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:40.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:40 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:41 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:53:42 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:42.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:53:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:42.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:53:42 np0005592158 python3.9[207840]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:53:43 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:43 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1013 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:43 np0005592158 python3.9[207993]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:53:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:44.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:44.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:44 np0005592158 python3.9[208146]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:53:45 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:45 np0005592158 python3.9[208299]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:53:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:53:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:46.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:53:46 np0005592158 python3.9[208452]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:53:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:46.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:46 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:53:47 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:47 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:47 np0005592158 python3.9[208605]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:53:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:53:47.427 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 08:53:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:53:47.428 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 08:53:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:53:47.428 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 08:53:48 np0005592158 python3.9[208758]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:53:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:48.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:48 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:48 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:48.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:48 np0005592158 python3.9[208911]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:53:49 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1018 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:49 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:50 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:53:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:50.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:53:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:53:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:50.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:53:51 np0005592158 python3.9[209064]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:51 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:51 np0005592158 podman[209188]: 2026-01-22 13:53:51.617458442 +0000 UTC m=+0.060234514 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 22 08:53:51 np0005592158 python3.9[209233]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:51 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:53:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:53:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:52.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:53:52 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:52 np0005592158 python3.9[209388]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:52.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:53 np0005592158 python3.9[209540]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:53 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:53 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1023 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:53 np0005592158 python3.9[209692]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:54.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:54 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:54.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:54 np0005592158 python3.9[209844]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:55 np0005592158 python3.9[209996]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:55 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:55 np0005592158 python3.9[210148]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:53:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:56.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:53:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:56.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:57 np0005592158 python3.9[210300]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:53:58 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:58 np0005592158 python3.9[210452]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:53:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:58.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:53:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:53:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:58.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:58 np0005592158 python3.9[210604]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:59 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:59 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:59 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1028 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:59 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:59 np0005592158 python3.9[210756]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:54:00 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:00 np0005592158 python3.9[210908]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:54:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:54:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:00.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:54:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:54:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:00.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:54:00 np0005592158 python3.9[211060]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:54:01 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:01 np0005592158 python3.9[211212]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:54:02 np0005592158 python3.9[211364]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:54:02 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:02.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:02.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:54:03 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:03 np0005592158 python3.9[211516]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:54:04 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1033 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:04 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:04.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:54:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:04.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:54:04 np0005592158 python3.9[211668]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 22 08:54:05 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:05 np0005592158 python3.9[211820]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 08:54:05 np0005592158 systemd[1]: Reloading.
Jan 22 08:54:06 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:54:06 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:54:06 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:06.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:06.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:07 np0005592158 python3.9[212008]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:54:07 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:07 np0005592158 podman[212133]: 2026-01-22 13:54:07.70505362 +0000 UTC m=+0.095939984 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 22 08:54:07 np0005592158 python3.9[212180]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:54:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:54:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:54:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:08.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:54:08 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:08 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1039 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:08 np0005592158 python3.9[212340]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:54:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:54:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:08.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:54:09 np0005592158 python3.9[212493]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:54:09 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:09 np0005592158 python3.9[212646]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:54:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:10.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:10 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:10.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:11 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:11 np0005592158 python3.9[212799]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:54:12 np0005592158 python3.9[212952]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:54:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:12.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:12 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:54:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:12.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:54:12 np0005592158 python3.9[213105]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:54:13 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:54:13 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0.
Jan 22 08:54:13 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:13.327756) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 08:54:13 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37
Jan 22 08:54:13 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090053327820, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 1667, "num_deletes": 256, "total_data_size": 3216962, "memory_usage": 3274680, "flush_reason": "Manual Compaction"}
Jan 22 08:54:13 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started
Jan 22 08:54:13 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090053341928, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 2115001, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18855, "largest_seqno": 20517, "table_properties": {"data_size": 2108467, "index_size": 3414, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16221, "raw_average_key_size": 20, "raw_value_size": 2094087, "raw_average_value_size": 2620, "num_data_blocks": 150, "num_entries": 799, "num_filter_entries": 799, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769089938, "oldest_key_time": 1769089938, "file_creation_time": 1769090053, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:54:13 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 14237 microseconds, and 5823 cpu microseconds.
Jan 22 08:54:13 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:54:13 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:13.342002) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 2115001 bytes OK
Jan 22 08:54:13 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:13.342026) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started
Jan 22 08:54:13 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:13.343459) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done
Jan 22 08:54:13 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:13.343475) EVENT_LOG_v1 {"time_micros": 1769090053343469, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 08:54:13 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:13.343495) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 08:54:13 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 3209033, prev total WAL file size 3209033, number of live WAL files 2.
Jan 22 08:54:13 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:54:13 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:13.344305) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353033' seq:0, type:0; will stop at (end)
Jan 22 08:54:13 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 08:54:13 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(2065KB)], [36(7562KB)]
Jan 22 08:54:13 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090053344366, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 9859324, "oldest_snapshot_seqno": -1}
Jan 22 08:54:13 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 5467 keys, 9664481 bytes, temperature: kUnknown
Jan 22 08:54:13 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090053405954, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 9664481, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9628150, "index_size": 21565, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13701, "raw_key_size": 141077, "raw_average_key_size": 25, "raw_value_size": 9528863, "raw_average_value_size": 1742, "num_data_blocks": 864, "num_entries": 5467, "num_filter_entries": 5467, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769090053, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:54:13 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:54:13 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:13.406333) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 9664481 bytes
Jan 22 08:54:13 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:13.408008) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 159.8 rd, 156.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 7.4 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(9.2) write-amplify(4.6) OK, records in: 5994, records dropped: 527 output_compression: NoCompression
Jan 22 08:54:13 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:13.408031) EVENT_LOG_v1 {"time_micros": 1769090053408019, "job": 20, "event": "compaction_finished", "compaction_time_micros": 61714, "compaction_time_cpu_micros": 22186, "output_level": 6, "num_output_files": 1, "total_output_size": 9664481, "num_input_records": 5994, "num_output_records": 5467, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 08:54:13 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:54:13 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090053408590, "job": 20, "event": "table_file_deletion", "file_number": 38}
Jan 22 08:54:13 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:54:13 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090053410513, "job": 20, "event": "table_file_deletion", "file_number": 36}
Jan 22 08:54:13 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:13.344234) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:54:13 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:13.410561) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:54:13 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:13.410566) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:54:13 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:13.410568) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:54:13 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:13.410569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:54:13 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:13.410570) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:54:13 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:13 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1043 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:14.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:14 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:14.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:14 np0005592158 python3.9[213258]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:15 np0005592158 python3.9[213410]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:15 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:16 np0005592158 python3.9[213562]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:16.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:16 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:54:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:16.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:54:17 np0005592158 python3.9[213714]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:17 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:17 np0005592158 python3.9[213866]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:54:18 np0005592158 python3.9[214018]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:54:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:18.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:54:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:54:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:18.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:54:18 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:18 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1049 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:18 np0005592158 python3.9[214170]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:19 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:19 np0005592158 python3.9[214322]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:20 np0005592158 python3.9[214474]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:54:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:20.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:54:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:20.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:20 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:20 np0005592158 python3.9[214626]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:21 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:22 np0005592158 podman[214651]: 2026-01-22 13:54:22.062925456 +0000 UTC m=+0.054315662 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 22 08:54:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:54:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:22.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:54:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:22.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:22 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:54:23 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:23 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1054 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:24.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:24.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:24 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:24 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:54:24 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:54:25 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:25 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:54:25 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:54:25 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:54:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:54:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:26.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:54:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:54:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:26.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:54:26 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:27 np0005592158 python3.9[214928]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 22 08:54:27 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:28 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:54:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:54:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:28.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:54:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:28.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:28 np0005592158 python3.9[215081]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 22 08:54:28 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:28 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1058 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:29 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:30 np0005592158 python3.9[215239]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 22 08:54:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:54:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:30.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:54:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:30.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:30 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:31 np0005592158 systemd-logind[787]: New session 50 of user zuul.
Jan 22 08:54:31 np0005592158 systemd[1]: Started Session 50 of User zuul.
Jan 22 08:54:31 np0005592158 systemd[1]: session-50.scope: Deactivated successfully.
Jan 22 08:54:31 np0005592158 systemd-logind[787]: Session 50 logged out. Waiting for processes to exit.
Jan 22 08:54:31 np0005592158 systemd-logind[787]: Removed session 50.
Jan 22 08:54:31 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:32.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:32 np0005592158 python3.9[215425]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:54:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:32.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:32 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0.
Jan 22 08:54:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:32.934020) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 08:54:32 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40
Jan 22 08:54:32 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090072934082, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 547, "num_deletes": 251, "total_data_size": 649288, "memory_usage": 660448, "flush_reason": "Manual Compaction"}
Jan 22 08:54:32 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started
Jan 22 08:54:32 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090072938941, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 415944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20522, "largest_seqno": 21064, "table_properties": {"data_size": 413181, "index_size": 735, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7315, "raw_average_key_size": 19, "raw_value_size": 407344, "raw_average_value_size": 1092, "num_data_blocks": 33, "num_entries": 373, "num_filter_entries": 373, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769090053, "oldest_key_time": 1769090053, "file_creation_time": 1769090072, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:54:32 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 4962 microseconds, and 1902 cpu microseconds.
Jan 22 08:54:32 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:54:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:32.938979) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 415944 bytes OK
Jan 22 08:54:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:32.939008) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started
Jan 22 08:54:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:32.940992) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done
Jan 22 08:54:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:32.941012) EVENT_LOG_v1 {"time_micros": 1769090072941006, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 08:54:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:32.941033) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 08:54:32 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 646048, prev total WAL file size 646048, number of live WAL files 2.
Jan 22 08:54:32 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:54:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:32.941624) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Jan 22 08:54:32 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 08:54:32 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(406KB)], [39(9437KB)]
Jan 22 08:54:32 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090072941697, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 10080425, "oldest_snapshot_seqno": -1}
Jan 22 08:54:33 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 5325 keys, 8372498 bytes, temperature: kUnknown
Jan 22 08:54:33 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090073006212, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 8372498, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8338056, "index_size": 19996, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13381, "raw_key_size": 138930, "raw_average_key_size": 26, "raw_value_size": 8242044, "raw_average_value_size": 1547, "num_data_blocks": 796, "num_entries": 5325, "num_filter_entries": 5325, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769090072, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:54:33 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:54:33 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:33.006530) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 8372498 bytes
Jan 22 08:54:33 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:33.008221) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.0 rd, 129.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.2 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(44.4) write-amplify(20.1) OK, records in: 5840, records dropped: 515 output_compression: NoCompression
Jan 22 08:54:33 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:33.008247) EVENT_LOG_v1 {"time_micros": 1769090073008235, "job": 22, "event": "compaction_finished", "compaction_time_micros": 64614, "compaction_time_cpu_micros": 20758, "output_level": 6, "num_output_files": 1, "total_output_size": 8372498, "num_input_records": 5840, "num_output_records": 5325, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 08:54:33 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:54:33 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090073008468, "job": 22, "event": "table_file_deletion", "file_number": 41}
Jan 22 08:54:33 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:54:33 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090073010452, "job": 22, "event": "table_file_deletion", "file_number": 39}
Jan 22 08:54:33 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:32.941537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:54:33 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:33.010601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:54:33 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:33.010612) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:54:33 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:33.010616) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:54:33 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:33.010620) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:54:33 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-13:54:33.010624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:54:33 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:33 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:54:33 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:54:33 np0005592158 python3.9[215546]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769090072.1129656-2660-18250776302670/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:54:33 np0005592158 python3.9[215746]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:54:34 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:34 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:34 np0005592158 python3.9[215822]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:54:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:34.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:54:34 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 08:54:34 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.5 total, 600.0 interval#012Cumulative writes: 6587 writes, 26K keys, 6587 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 6587 writes, 1237 syncs, 5.32 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 560 writes, 844 keys, 560 commit groups, 1.0 writes per commit group, ingest: 0.27 MB, 0.00 MB/s#012Interval WAL: 560 writes, 276 syncs, 2.03 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.5 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.5 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.5 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_
Jan 22 08:54:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:34.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:34 np0005592158 python3.9[215972]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:54:35 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:35 np0005592158 python3.9[216093]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769090074.3972583-2660-63576531896094/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:36 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:36 np0005592158 python3.9[216243]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:54:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:36.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:36 np0005592158 python3.9[216364]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769090075.664014-2660-22902125891366/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=bc7f3bb7d4094c596a18178a888511b54e157ba4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:36.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:37 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:37 np0005592158 python3.9[216514]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:54:37 np0005592158 python3.9[216635]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769090076.8212724-2660-19267187141685/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:37 np0005592158 podman[216636]: 2026-01-22 13:54:37.96430828 +0000 UTC m=+0.094722350 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:54:38 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:54:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:38.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:38 np0005592158 python3.9[216812]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:54:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:38.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:39 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:39 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1068 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:39 np0005592158 python3.9[216933]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769090078.0048456-2660-203431774405702/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:40 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:40 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:54:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:40.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:54:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:40.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:40 np0005592158 python3.9[217085]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:54:41 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:41 np0005592158 python3.9[217237]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:54:42 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:42 np0005592158 python3.9[217389]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:54:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:42.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:42.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:54:43 np0005592158 python3.9[217541]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:54:43 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:43 np0005592158 python3.9[217664]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1769090082.6020703-2982-100454722767971/.source _original_basename=.n8ce4_a6 follow=False checksum=bf1e2aecb466d047605f32ca3ded8b7745e19a70 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 22 08:54:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:44.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:44.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:44 np0005592158 python3.9[217816]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:54:45 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:45 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1073 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:45 np0005592158 python3.9[217968]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:54:46 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:46 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:46 np0005592158 python3.9[218089]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769090085.216663-3059-197895201000764/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:46.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:46.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:47 np0005592158 python3.9[218239]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:54:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:54:47.427 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 08:54:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:54:47.428 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 08:54:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:54:47.428 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 08:54:47 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:47 np0005592158 python3.9[218360]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769090086.5527017-3104-173864243702517/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:54:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:48.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:48 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:48 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:48 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1078 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:54:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:48.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:54:48 np0005592158 python3.9[218512]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 22 08:54:49 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:50 np0005592158 python3.9[218664]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 22 08:54:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:50.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:50 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:50.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:51 np0005592158 python3[218816]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 22 08:54:52 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:54:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:52.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:54:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:52.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:52 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:53 np0005592158 podman[218848]: 2026-01-22 13:54:53.104788351 +0000 UTC m=+0.092160981 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 22 08:54:53 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:54:54 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:54 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1083 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:54:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:54.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:54:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:54.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:55 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:56.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:56.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:57 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:57 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:54:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:58.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:54:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:58.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:55:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:00.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:55:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:00.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:02.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:02.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:55:03 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:03 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:03 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1088 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:55:03 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:03 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:03 np0005592158 podman[218829]: 2026-01-22 13:55:03.83936425 +0000 UTC m=+12.421506675 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 22 08:55:03 np0005592158 podman[218969]: 2026-01-22 13:55:03.992076122 +0000 UTC m=+0.050711253 container create 4dfd2302381300ceaae8150882466b81aa1f5024d159d8169f4c727b714fe739 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=edpm, container_name=nova_compute_init, org.label-schema.license=GPLv2)
Jan 22 08:55:03 np0005592158 podman[218969]: 2026-01-22 13:55:03.962581772 +0000 UTC m=+0.021216923 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 22 08:55:03 np0005592158 python3[218816]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 22 08:55:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:55:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:04.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:55:04 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:04 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:04 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:04 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:04 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1093 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:55:04 np0005592158 python3.9[219160]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:55:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:55:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:04.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:55:05 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:06 np0005592158 python3.9[219314]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 22 08:55:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:06.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:06.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:06 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:07 np0005592158 python3.9[219466]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 22 08:55:08 np0005592158 podman[219570]: 2026-01-22 13:55:08.140889874 +0000 UTC m=+0.128142388 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 08:55:08 np0005592158 python3[219640]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 22 08:55:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:08.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:08 np0005592158 podman[219681]: 2026-01-22 13:55:08.547553177 +0000 UTC m=+0.026283683 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 22 08:55:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:55:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.002000054s ======
Jan 22 08:55:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:08.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Jan 22 08:55:08 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:08 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:08 np0005592158 podman[219681]: 2026-01-22 13:55:08.916888986 +0000 UTC m=+0.395619462 container create 026f0c814fdad2eb16abf9c007c9103190d38a095777b87174b3489312fc6b9a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=edpm, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 22 08:55:08 np0005592158 python3[219640]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 22 08:55:09 np0005592158 python3.9[219870]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:55:09 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:10.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:10.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:10 np0005592158 python3.9[220024]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:55:11 np0005592158 python3.9[220175]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769090110.926452-3392-104437649725952/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:55:11 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:12 np0005592158 python3.9[220251]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 08:55:12 np0005592158 systemd[1]: Reloading.
Jan 22 08:55:12 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:55:12 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:55:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:12.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:12.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:13 np0005592158 python3.9[220361]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:55:13 np0005592158 systemd[1]: Reloading.
Jan 22 08:55:13 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:13 np0005592158 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:55:13 np0005592158 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:55:13 np0005592158 systemd[1]: Starting nova_compute container...
Jan 22 08:55:13 np0005592158 systemd[1]: Started libcrun container.
Jan 22 08:55:13 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb57ec80f67d7d6847e36490c1aece2d6b4c7211f0840cc8c85095f4ddd5c0ac/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 22 08:55:13 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb57ec80f67d7d6847e36490c1aece2d6b4c7211f0840cc8c85095f4ddd5c0ac/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 22 08:55:13 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb57ec80f67d7d6847e36490c1aece2d6b4c7211f0840cc8c85095f4ddd5c0ac/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 22 08:55:13 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb57ec80f67d7d6847e36490c1aece2d6b4c7211f0840cc8c85095f4ddd5c0ac/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 22 08:55:13 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb57ec80f67d7d6847e36490c1aece2d6b4c7211f0840cc8c85095f4ddd5c0ac/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 22 08:55:13 np0005592158 podman[220400]: 2026-01-22 13:55:13.544706178 +0000 UTC m=+0.099369398 container init 026f0c814fdad2eb16abf9c007c9103190d38a095777b87174b3489312fc6b9a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 08:55:13 np0005592158 podman[220400]: 2026-01-22 13:55:13.551528946 +0000 UTC m=+0.106192156 container start 026f0c814fdad2eb16abf9c007c9103190d38a095777b87174b3489312fc6b9a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 22 08:55:13 np0005592158 podman[220400]: nova_compute
Jan 22 08:55:13 np0005592158 nova_compute[220416]: + sudo -E kolla_set_configs
Jan 22 08:55:13 np0005592158 systemd[1]: Started nova_compute container.
Jan 22 08:55:13 np0005592158 nova_compute[220416]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 22 08:55:13 np0005592158 nova_compute[220416]: INFO:__main__:Validating config file
Jan 22 08:55:13 np0005592158 nova_compute[220416]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 22 08:55:13 np0005592158 nova_compute[220416]: INFO:__main__:Copying service configuration files
Jan 22 08:55:13 np0005592158 nova_compute[220416]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 22 08:55:13 np0005592158 nova_compute[220416]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 22 08:55:13 np0005592158 nova_compute[220416]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 22 08:55:13 np0005592158 nova_compute[220416]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 22 08:55:13 np0005592158 nova_compute[220416]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 22 08:55:13 np0005592158 nova_compute[220416]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 22 08:55:13 np0005592158 nova_compute[220416]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 22 08:55:13 np0005592158 nova_compute[220416]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 22 08:55:13 np0005592158 nova_compute[220416]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 22 08:55:13 np0005592158 nova_compute[220416]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 22 08:55:13 np0005592158 nova_compute[220416]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 22 08:55:13 np0005592158 nova_compute[220416]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 22 08:55:13 np0005592158 nova_compute[220416]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 22 08:55:13 np0005592158 nova_compute[220416]: INFO:__main__:Deleting /etc/ceph
Jan 22 08:55:13 np0005592158 nova_compute[220416]: INFO:__main__:Creating directory /etc/ceph
Jan 22 08:55:13 np0005592158 nova_compute[220416]: INFO:__main__:Setting permission for /etc/ceph
Jan 22 08:55:13 np0005592158 nova_compute[220416]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 22 08:55:13 np0005592158 nova_compute[220416]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 22 08:55:13 np0005592158 nova_compute[220416]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 22 08:55:13 np0005592158 nova_compute[220416]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 22 08:55:13 np0005592158 nova_compute[220416]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 22 08:55:13 np0005592158 nova_compute[220416]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 22 08:55:13 np0005592158 nova_compute[220416]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 22 08:55:13 np0005592158 nova_compute[220416]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 22 08:55:13 np0005592158 nova_compute[220416]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 22 08:55:13 np0005592158 nova_compute[220416]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 22 08:55:13 np0005592158 nova_compute[220416]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 22 08:55:13 np0005592158 nova_compute[220416]: INFO:__main__:Writing out command to execute
Jan 22 08:55:13 np0005592158 nova_compute[220416]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 22 08:55:13 np0005592158 nova_compute[220416]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 22 08:55:13 np0005592158 nova_compute[220416]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 22 08:55:13 np0005592158 nova_compute[220416]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 22 08:55:13 np0005592158 nova_compute[220416]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 22 08:55:13 np0005592158 nova_compute[220416]: ++ cat /run_command
Jan 22 08:55:13 np0005592158 nova_compute[220416]: + CMD=nova-compute
Jan 22 08:55:13 np0005592158 nova_compute[220416]: + ARGS=
Jan 22 08:55:13 np0005592158 nova_compute[220416]: + sudo kolla_copy_cacerts
Jan 22 08:55:13 np0005592158 nova_compute[220416]: + [[ ! -n '' ]]
Jan 22 08:55:13 np0005592158 nova_compute[220416]: + . kolla_extend_start
Jan 22 08:55:13 np0005592158 nova_compute[220416]: + echo 'Running command: '\''nova-compute'\'''
Jan 22 08:55:13 np0005592158 nova_compute[220416]: Running command: 'nova-compute'
Jan 22 08:55:13 np0005592158 nova_compute[220416]: + umask 0022
Jan 22 08:55:13 np0005592158 nova_compute[220416]: + exec nova-compute
Jan 22 08:55:13 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:55:14 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:14 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:55:14 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:55:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:14.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:55:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:14.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:15 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:15 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:15 np0005592158 python3.9[220578]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:55:16 np0005592158 nova_compute[220416]: 2026-01-22 13:55:16.145 220420 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 22 08:55:16 np0005592158 nova_compute[220416]: 2026-01-22 13:55:16.146 220420 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 22 08:55:16 np0005592158 nova_compute[220416]: 2026-01-22 13:55:16.146 220420 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 22 08:55:16 np0005592158 nova_compute[220416]: 2026-01-22 13:55:16.146 220420 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 22 08:55:16 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:16 np0005592158 python3.9[220730]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:55:16 np0005592158 nova_compute[220416]: 2026-01-22 13:55:16.324 220420 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 08:55:16 np0005592158 nova_compute[220416]: 2026-01-22 13:55:16.342 220420 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 08:55:16 np0005592158 nova_compute[220416]: 2026-01-22 13:55:16.343 220420 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 22 08:55:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:55:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:16.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:55:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:16.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:16 np0005592158 nova_compute[220416]: 2026-01-22 13:55:16.997 220420 INFO nova.virt.driver [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.191 220420 INFO nova.compute.provider_config [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.207 220420 DEBUG oslo_concurrency.lockutils [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.208 220420 DEBUG oslo_concurrency.lockutils [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.208 220420 DEBUG oslo_concurrency.lockutils [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.209 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.209 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.209 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.209 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.209 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.209 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.210 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.210 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.210 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.210 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.210 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.210 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.210 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.211 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.211 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.211 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.211 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.211 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.211 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.212 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.212 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.212 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.212 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.212 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.212 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.212 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.213 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 python3.9[220882]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.213 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.213 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.213 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.213 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.214 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.214 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.214 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.214 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.215 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.215 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.215 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.215 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.215 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.215 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.216 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.216 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.216 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.216 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.216 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.217 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.217 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.217 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.217 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.217 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.217 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.217 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.218 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.218 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.218 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.218 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.218 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.218 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.218 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.219 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.219 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.219 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.219 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.219 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.219 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.220 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.220 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.220 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.220 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.220 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.220 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.221 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.221 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.221 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.221 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.222 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.222 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.222 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.222 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.222 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.223 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.223 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.223 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.223 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.223 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.223 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.224 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.224 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.224 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.224 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.224 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.224 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.224 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.225 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.225 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.225 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.225 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.225 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.225 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.225 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.226 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.226 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.226 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.226 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.226 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.226 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.226 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.227 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.227 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.227 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.227 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.227 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.228 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.228 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.228 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.228 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.228 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.228 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.228 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.229 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.229 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.229 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.229 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.229 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.229 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.229 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.230 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.230 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.230 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.230 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.230 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.230 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.230 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.231 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.231 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.231 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.231 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.231 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.231 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.231 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.232 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.232 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.232 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.232 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.232 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.232 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.232 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.233 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.233 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.233 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.233 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.233 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.234 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.234 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.234 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.234 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.234 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.234 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.234 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.235 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.235 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.235 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.235 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.235 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.235 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.235 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.236 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.236 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.236 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.236 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.236 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.236 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.237 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.237 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.237 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.237 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.237 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.237 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.237 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.238 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.238 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.238 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.238 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.238 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.238 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.238 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.239 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.239 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.239 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.239 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.239 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.239 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.239 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.240 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.240 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.240 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.240 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.240 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.240 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.240 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.241 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.241 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.241 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.241 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.241 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.241 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.242 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.242 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.242 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.242 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.242 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.242 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.242 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.243 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.243 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.243 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.243 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.243 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.244 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.244 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.244 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.244 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.244 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.244 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.245 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.245 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.245 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.245 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.245 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.245 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.245 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.246 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.246 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.246 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.246 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.246 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.246 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.246 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.247 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.247 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.247 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.247 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.247 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.248 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.248 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.248 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.248 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.248 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.248 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.249 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.249 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.249 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.249 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.249 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.249 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.250 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.250 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.250 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.250 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.250 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.251 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.251 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.251 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.251 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.252 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.252 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.252 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.252 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.252 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.252 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.253 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.253 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.253 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.253 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.253 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.254 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.254 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.254 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.254 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.255 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.255 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.255 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.255 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.255 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.256 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.256 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.256 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.256 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.257 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.257 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.257 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.257 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.257 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.258 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.258 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.258 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.258 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.258 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.258 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.259 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.259 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.259 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.259 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.260 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.260 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.260 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.261 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.261 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.261 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.261 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.261 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.261 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.262 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.262 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.262 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.262 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.262 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.263 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.263 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.263 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.263 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.263 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.264 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.264 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.264 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.264 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.264 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.264 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.265 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.265 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.265 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.265 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.265 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.266 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.266 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.266 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.266 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.266 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.266 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.267 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.267 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.267 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.267 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.267 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.268 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.268 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.268 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.268 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.268 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.268 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.269 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.269 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.269 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.269 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.270 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.270 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.270 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.270 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.270 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.271 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.271 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.271 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.271 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.271 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.271 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.272 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.272 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.272 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.272 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.272 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.272 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.273 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.273 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.273 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.273 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.273 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.273 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.274 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.274 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.274 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.274 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.274 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.274 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.275 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.275 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.275 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.275 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.275 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.275 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.276 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.276 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.276 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.276 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.276 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.276 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.276 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.277 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.277 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.277 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.277 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.277 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.277 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.277 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.278 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.278 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.278 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.278 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.278 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.278 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.279 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.279 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.279 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.279 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.279 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.279 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.279 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.280 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.280 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.280 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.280 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.280 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.280 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.281 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.281 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.281 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.281 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.281 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.281 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.281 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.282 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.282 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.282 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.282 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.282 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.282 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.283 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.283 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.283 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.283 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.283 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.283 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.284 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.284 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.284 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.284 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.285 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.285 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.285 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.285 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.285 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.285 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.286 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.286 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.286 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.286 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.286 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.286 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.287 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.287 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.287 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.287 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.287 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.287 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.288 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.288 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.288 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.288 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.288 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.288 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.289 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.289 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.289 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.289 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.289 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.289 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.290 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.290 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.290 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.290 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.290 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.290 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.291 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.291 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.291 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.291 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.291 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.291 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.291 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.292 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.292 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.292 220420 WARNING oslo_config.cfg [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 22 08:55:17 np0005592158 nova_compute[220416]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 22 08:55:17 np0005592158 nova_compute[220416]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 22 08:55:17 np0005592158 nova_compute[220416]: and ``live_migration_inbound_addr`` respectively.
Jan 22 08:55:17 np0005592158 nova_compute[220416]: ).  Its value may be silently ignored in the future.#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.292 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.293 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.293 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.293 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.293 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.293 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.293 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.293 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.294 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.294 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.294 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.294 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.294 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.294 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.294 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.295 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.295 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.295 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.295 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.rbd_secret_uuid        = 088fe176-0106-5401-803c-2da38b73b76a log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.295 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.295 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.296 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.296 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.296 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.296 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.296 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.296 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.297 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.297 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.297 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.297 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.297 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.297 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.298 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.298 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.298 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.298 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.298 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.298 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.298 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.299 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.299 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.299 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.299 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.299 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.299 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.299 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.300 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.300 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.300 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.300 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.300 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.300 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.300 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.301 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.301 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.301 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.301 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.301 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.301 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.301 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.302 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.302 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.302 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.302 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.302 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.302 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.302 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.302 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.303 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.303 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.303 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.303 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.303 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.303 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.304 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.304 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.304 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.304 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.304 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.305 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.305 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.305 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.305 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.305 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.305 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.306 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.306 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.306 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.306 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.306 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.307 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.307 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.307 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.307 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.307 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.307 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.308 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.308 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.308 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.308 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.308 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.308 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.309 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.309 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.309 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.309 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.309 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.309 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.309 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.309 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.310 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.310 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.310 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.310 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.310 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.311 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.311 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.311 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.311 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.311 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.311 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.312 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.312 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.312 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.312 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.312 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.312 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.312 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.312 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.313 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.313 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.313 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.313 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.313 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.313 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.313 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.314 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.314 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.314 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.314 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.314 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.315 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.315 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.315 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.315 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.315 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.315 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.315 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.316 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.316 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.316 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.316 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.316 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.316 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.317 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.317 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.317 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.317 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.317 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.317 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.317 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.318 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.318 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.318 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.318 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.318 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.318 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.318 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.319 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.319 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.319 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.319 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.319 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.319 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.319 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.320 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.320 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.320 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.320 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.320 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.320 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.321 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.321 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.321 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.321 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.321 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.321 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.322 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.322 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.322 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.322 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.322 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.322 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.322 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.323 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.323 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.323 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.323 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.323 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.323 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.324 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.324 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.324 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.324 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.324 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.324 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.324 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.324 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.325 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.325 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.325 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.325 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.325 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.325 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.325 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.326 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.326 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.326 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.326 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.326 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.326 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.327 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.327 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.327 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.327 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.327 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.327 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.328 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.328 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.328 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.328 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.328 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.328 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.328 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.328 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.329 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.329 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.329 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.329 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.329 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.329 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.330 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.330 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.330 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.330 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.330 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.330 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.331 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.331 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.331 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.331 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.331 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.331 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.332 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.332 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.332 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.332 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.332 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.332 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.332 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.333 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.333 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.333 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.333 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.333 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.333 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.333 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.334 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.334 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.334 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.334 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.334 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.334 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.334 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.335 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.335 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.335 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.335 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.335 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.335 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.335 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.336 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.336 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.336 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.336 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.336 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.336 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.336 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.337 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.337 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.337 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.337 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.337 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.337 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.338 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.338 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.338 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.338 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.338 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.338 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.339 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.339 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.339 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.339 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.339 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.339 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.339 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.340 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.340 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.340 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.340 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.340 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.340 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.340 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.341 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.341 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.341 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.341 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.341 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.341 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.341 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.341 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.342 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.342 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.342 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.342 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.342 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.342 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.342 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.343 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.343 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.343 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.343 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.343 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.343 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.344 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.344 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.344 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.344 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.344 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.344 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.345 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.345 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.345 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.345 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.345 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.345 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.345 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.346 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.346 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.346 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.346 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.346 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.346 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.346 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.346 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.347 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.347 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.347 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.347 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.347 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.347 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.347 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.348 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.348 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.348 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.348 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.348 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.348 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.348 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.349 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.349 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.349 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.349 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.349 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.349 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.349 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.350 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.350 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.350 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.350 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.350 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.350 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.350 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.351 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.351 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.351 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.351 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.351 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.351 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.351 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.352 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.352 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.352 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.352 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.352 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.352 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.352 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.353 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.353 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.353 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.353 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.353 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.353 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.354 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.354 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.354 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.354 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.354 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.354 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.354 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.355 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.355 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.355 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.355 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.355 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.355 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.355 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.356 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.356 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.356 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.356 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.356 220420 DEBUG oslo_service.service [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.358 220420 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.373 220420 DEBUG nova.virt.libvirt.host [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.374 220420 DEBUG nova.virt.libvirt.host [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.374 220420 DEBUG nova.virt.libvirt.host [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.374 220420 DEBUG nova.virt.libvirt.host [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Jan 22 08:55:17 np0005592158 systemd[1]: Starting libvirt QEMU daemon...
Jan 22 08:55:17 np0005592158 systemd[1]: Started libvirt QEMU daemon.
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.451 220420 DEBUG nova.virt.libvirt.host [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fb979b03460> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.453 220420 DEBUG nova.virt.libvirt.host [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fb979b03460> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.455 220420 INFO nova.virt.libvirt.driver [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Connection event '1' reason 'None'#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.496 220420 WARNING nova.virt.libvirt.driver [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Cannot update service status on host "compute-1.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.#033[00m
Jan 22 08:55:17 np0005592158 nova_compute[220416]: 2026-01-22 13:55:17.496 220420 DEBUG nova.virt.libvirt.volume.mount [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Jan 22 08:55:18 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:18 np0005592158 nova_compute[220416]: 2026-01-22 13:55:18.275 220420 INFO nova.virt.libvirt.host [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Libvirt host capabilities <capabilities>
Jan 22 08:55:18 np0005592158 nova_compute[220416]: 
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <host>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <uuid>2198fae5-1aa3-4940-83f6-677ed40734bb</uuid>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <cpu>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <arch>x86_64</arch>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model>EPYC-Rome-v4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <vendor>AMD</vendor>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <microcode version='16777317'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <signature family='23' model='49' stepping='0'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <maxphysaddr mode='emulate' bits='40'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature name='x2apic'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature name='tsc-deadline'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature name='osxsave'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature name='hypervisor'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature name='tsc_adjust'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature name='spec-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature name='stibp'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature name='arch-capabilities'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature name='ssbd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature name='cmp_legacy'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature name='topoext'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature name='virt-ssbd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature name='lbrv'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature name='tsc-scale'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature name='vmcb-clean'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature name='pause-filter'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature name='pfthreshold'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature name='svme-addr-chk'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature name='rdctl-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature name='skip-l1dfl-vmentry'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature name='mds-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature name='pschange-mc-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <pages unit='KiB' size='4'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <pages unit='KiB' size='2048'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <pages unit='KiB' size='1048576'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </cpu>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <power_management>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <suspend_mem/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </power_management>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <iommu support='no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <migration_features>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <live/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <uri_transports>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <uri_transport>tcp</uri_transport>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <uri_transport>rdma</uri_transport>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </uri_transports>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </migration_features>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <topology>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <cells num='1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <cell id='0'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:          <memory unit='KiB'>7864312</memory>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:          <pages unit='KiB' size='4'>1966078</pages>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:          <pages unit='KiB' size='2048'>0</pages>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:          <pages unit='KiB' size='1048576'>0</pages>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:          <distances>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:            <sibling id='0' value='10'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:          </distances>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:          <cpus num='8'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:          </cpus>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        </cell>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </cells>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </topology>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <cache>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </cache>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <secmodel>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model>selinux</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <doi>0</doi>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </secmodel>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <secmodel>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model>dac</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <doi>0</doi>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <baselabel type='kvm'>+107:+107</baselabel>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <baselabel type='qemu'>+107:+107</baselabel>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </secmodel>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  </host>
Jan 22 08:55:18 np0005592158 nova_compute[220416]: 
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <guest>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <os_type>hvm</os_type>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <arch name='i686'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <wordsize>32</wordsize>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <domain type='qemu'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <domain type='kvm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </arch>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <features>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <pae/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <nonpae/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <acpi default='on' toggle='yes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <apic default='on' toggle='no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <cpuselection/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <deviceboot/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <disksnapshot default='on' toggle='no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <externalSnapshot/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </features>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  </guest>
Jan 22 08:55:18 np0005592158 nova_compute[220416]: 
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <guest>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <os_type>hvm</os_type>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <arch name='x86_64'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <wordsize>64</wordsize>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <domain type='qemu'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <domain type='kvm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </arch>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <features>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <acpi default='on' toggle='yes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <apic default='on' toggle='no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <cpuselection/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <deviceboot/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <disksnapshot default='on' toggle='no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <externalSnapshot/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </features>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  </guest>
Jan 22 08:55:18 np0005592158 nova_compute[220416]: 
Jan 22 08:55:18 np0005592158 nova_compute[220416]: </capabilities>
Jan 22 08:55:18 np0005592158 nova_compute[220416]: 2026-01-22 13:55:18.283 220420 DEBUG nova.virt.libvirt.host [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 22 08:55:18 np0005592158 nova_compute[220416]: 2026-01-22 13:55:18.303 220420 DEBUG nova.virt.libvirt.host [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 22 08:55:18 np0005592158 nova_compute[220416]: <domainCapabilities>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <path>/usr/libexec/qemu-kvm</path>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <domain>kvm</domain>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <arch>i686</arch>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <vcpu max='4096'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <iothreads supported='yes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <os supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <enum name='firmware'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <loader supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='type'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>rom</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>pflash</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='readonly'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>yes</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>no</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='secure'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>no</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </loader>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  </os>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <cpu>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <mode name='host-passthrough' supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='hostPassthroughMigratable'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>on</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>off</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </mode>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <mode name='maximum' supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='maximumMigratable'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>on</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>off</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </mode>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <mode name='host-model' supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <vendor>AMD</vendor>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='x2apic'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='tsc-deadline'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='hypervisor'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='tsc_adjust'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='spec-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='stibp'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='ssbd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='cmp_legacy'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='overflow-recov'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='succor'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='ibrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='amd-ssbd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='virt-ssbd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='lbrv'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='tsc-scale'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='vmcb-clean'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='flushbyasid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='pause-filter'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='pfthreshold'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='svme-addr-chk'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='disable' name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </mode>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <mode name='custom' supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Broadwell'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Broadwell-IBRS'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Broadwell-noTSX'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Broadwell-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Broadwell-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Broadwell-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Broadwell-v4'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Cascadelake-Server'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Cascadelake-Server-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Cascadelake-Server-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Cascadelake-Server-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Cascadelake-Server-v4'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Cascadelake-Server-v5'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='ClearwaterForest'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bhi-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ddpd-u'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sha512'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sm3'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sm4'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='ClearwaterForest-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bhi-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ddpd-u'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sha512'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sm3'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sm4'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Cooperlake'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Cooperlake-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Cooperlake-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Denverton'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Denverton-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Denverton-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Denverton-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Dhyana-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Genoa'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Genoa-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Genoa-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='perfmon-v2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Milan'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Milan-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Milan-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Milan-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Rome'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Rome-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Rome-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Rome-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Turin'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='perfmon-v2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='prefetchi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbpb'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Turin-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='perfmon-v2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='prefetchi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbpb'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-v4'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-v5'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='GraniteRapids'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='GraniteRapids-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='GraniteRapids-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx10'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx10-128'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx10-256'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx10-512'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='GraniteRapids-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx10'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx10-128'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx10-256'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx10-512'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Haswell'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Haswell-IBRS'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Haswell-noTSX'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Haswell-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Haswell-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Haswell-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Haswell-v4'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Icelake-Server'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Icelake-Server-noTSX'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Icelake-Server-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Icelake-Server-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Icelake-Server-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Icelake-Server-v4'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Icelake-Server-v5'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Icelake-Server-v6'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Icelake-Server-v7'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='IvyBridge'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='IvyBridge-IBRS'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='IvyBridge-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='IvyBridge-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='KnightsMill'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512er'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512pf'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='KnightsMill-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512er'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512pf'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Opteron_G4'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Opteron_G4-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Opteron_G5'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tbm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Opteron_G5-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tbm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='SapphireRapids'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='SapphireRapids-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='SapphireRapids-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='SapphireRapids-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='SapphireRapids-v4'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='SierraForest'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='SierraForest-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='SierraForest-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='SierraForest-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Client'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Client-IBRS'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Client-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Client-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Client-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Client-v4'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Server'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Server-IBRS'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Server-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Server-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Server-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Server-v4'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Server-v5'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Snowridge'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Snowridge-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Snowridge-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Snowridge-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Snowridge-v4'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='athlon'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='athlon-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='core2duo'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='core2duo-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='coreduo'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='coreduo-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='n270'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='n270-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='phenom'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='phenom-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </mode>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  </cpu>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <memoryBacking supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <enum name='sourceType'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <value>file</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <value>anonymous</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <value>memfd</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  </memoryBacking>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <devices>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <disk supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='diskDevice'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>disk</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>cdrom</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>floppy</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>lun</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='bus'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>fdc</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>scsi</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>virtio</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>usb</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>sata</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='model'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>virtio</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>virtio-transitional</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>virtio-non-transitional</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </disk>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <graphics supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='type'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>vnc</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>egl-headless</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>dbus</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </graphics>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <video supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='modelType'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>vga</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>cirrus</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>virtio</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>none</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>bochs</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>ramfb</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </video>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <hostdev supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='mode'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>subsystem</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='startupPolicy'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>default</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>mandatory</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>requisite</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>optional</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='subsysType'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>usb</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>pci</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>scsi</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='capsType'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='pciBackend'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </hostdev>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <rng supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='model'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>virtio</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>virtio-transitional</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>virtio-non-transitional</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='backendModel'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>random</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>egd</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>builtin</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </rng>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <filesystem supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='driverType'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>path</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>handle</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>virtiofs</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </filesystem>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <tpm supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='model'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>tpm-tis</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>tpm-crb</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='backendModel'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>emulator</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>external</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='backendVersion'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>2.0</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </tpm>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <redirdev supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='bus'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>usb</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </redirdev>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <channel supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='type'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>pty</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>unix</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </channel>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <crypto supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='model'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='type'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>qemu</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='backendModel'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>builtin</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </crypto>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <interface supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='backendType'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>default</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>passt</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </interface>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <panic supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='model'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>isa</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>hyperv</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </panic>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <console supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='type'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>null</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>vc</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>pty</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>dev</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>file</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>pipe</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>stdio</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>udp</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>tcp</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>unix</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>qemu-vdagent</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>dbus</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </console>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  </devices>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <features>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <gic supported='no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <vmcoreinfo supported='yes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <genid supported='yes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <backingStoreInput supported='yes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <backup supported='yes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <async-teardown supported='yes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <s390-pv supported='no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <ps2 supported='yes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <tdx supported='no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <sev supported='no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <sgx supported='no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <hyperv supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='features'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>relaxed</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>vapic</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>spinlocks</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>vpindex</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>runtime</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>synic</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>stimer</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>reset</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>vendor_id</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>frequencies</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>reenlightenment</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>tlbflush</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>ipi</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>avic</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>emsr_bitmap</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>xmm_input</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <defaults>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <spinlocks>4095</spinlocks>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <stimer_direct>on</stimer_direct>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <tlbflush_direct>on</tlbflush_direct>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <tlbflush_extended>on</tlbflush_extended>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </defaults>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </hyperv>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <launchSecurity supported='no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  </features>
Jan 22 08:55:18 np0005592158 nova_compute[220416]: </domainCapabilities>
Jan 22 08:55:18 np0005592158 nova_compute[220416]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Jan 22 08:55:18 np0005592158 nova_compute[220416]: 2026-01-22 13:55:18.313 220420 DEBUG nova.virt.libvirt.host [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 22 08:55:18 np0005592158 nova_compute[220416]: <domainCapabilities>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <path>/usr/libexec/qemu-kvm</path>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <domain>kvm</domain>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <arch>i686</arch>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <vcpu max='240'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <iothreads supported='yes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <os supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <enum name='firmware'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <loader supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='type'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>rom</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>pflash</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='readonly'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>yes</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>no</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='secure'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>no</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </loader>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  </os>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <cpu>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <mode name='host-passthrough' supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='hostPassthroughMigratable'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>on</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>off</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </mode>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <mode name='maximum' supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='maximumMigratable'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>on</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>off</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </mode>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <mode name='host-model' supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <vendor>AMD</vendor>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='x2apic'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='tsc-deadline'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='hypervisor'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='tsc_adjust'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='spec-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='stibp'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='ssbd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='cmp_legacy'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='overflow-recov'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='succor'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='ibrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='amd-ssbd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='virt-ssbd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='lbrv'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='tsc-scale'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='vmcb-clean'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='flushbyasid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='pause-filter'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='pfthreshold'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='svme-addr-chk'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='disable' name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </mode>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <mode name='custom' supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Broadwell'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Broadwell-IBRS'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Broadwell-noTSX'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Broadwell-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Broadwell-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Broadwell-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Broadwell-v4'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Cascadelake-Server'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Cascadelake-Server-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Cascadelake-Server-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Cascadelake-Server-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Cascadelake-Server-v4'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Cascadelake-Server-v5'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='ClearwaterForest'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bhi-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ddpd-u'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sha512'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sm3'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sm4'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='ClearwaterForest-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bhi-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ddpd-u'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sha512'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sm3'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sm4'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Cooperlake'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Cooperlake-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Cooperlake-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Denverton'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Denverton-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Denverton-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Denverton-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Dhyana-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Genoa'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Genoa-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Genoa-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='perfmon-v2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Milan'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Milan-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Milan-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Milan-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Rome'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Rome-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Rome-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Rome-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Turin'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='perfmon-v2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='prefetchi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbpb'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Turin-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='perfmon-v2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='prefetchi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbpb'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-v4'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-v5'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='GraniteRapids'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='GraniteRapids-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='GraniteRapids-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx10'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx10-128'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx10-256'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx10-512'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='GraniteRapids-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx10'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx10-128'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx10-256'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx10-512'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Haswell'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Haswell-IBRS'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Haswell-noTSX'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Haswell-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Haswell-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Haswell-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Haswell-v4'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Icelake-Server'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Icelake-Server-noTSX'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Icelake-Server-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Icelake-Server-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Icelake-Server-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Icelake-Server-v4'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Icelake-Server-v5'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Icelake-Server-v6'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Icelake-Server-v7'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='IvyBridge'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='IvyBridge-IBRS'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='IvyBridge-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='IvyBridge-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='KnightsMill'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512er'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512pf'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='KnightsMill-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512er'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512pf'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Opteron_G4'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Opteron_G4-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Opteron_G5'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tbm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Opteron_G5-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tbm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='SapphireRapids'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='SapphireRapids-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='SapphireRapids-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='SapphireRapids-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='SapphireRapids-v4'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='SierraForest'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='SierraForest-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='SierraForest-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='SierraForest-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Client'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Client-IBRS'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Client-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Client-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Client-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Client-v4'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Server'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Server-IBRS'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Server-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Server-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Server-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Server-v4'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Server-v5'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Snowridge'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Snowridge-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Snowridge-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Snowridge-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Snowridge-v4'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='athlon'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='athlon-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='core2duo'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='core2duo-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='coreduo'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='coreduo-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='n270'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='n270-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='phenom'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='phenom-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </mode>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  </cpu>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <memoryBacking supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <enum name='sourceType'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <value>file</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <value>anonymous</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <value>memfd</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  </memoryBacking>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <devices>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <disk supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='diskDevice'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>disk</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>cdrom</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>floppy</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>lun</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='bus'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>ide</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>fdc</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>scsi</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>virtio</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>usb</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>sata</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='model'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>virtio</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>virtio-transitional</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>virtio-non-transitional</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </disk>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <graphics supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='type'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>vnc</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>egl-headless</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>dbus</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </graphics>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <video supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='modelType'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>vga</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>cirrus</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>virtio</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>none</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>bochs</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>ramfb</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </video>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <hostdev supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='mode'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>subsystem</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='startupPolicy'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>default</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>mandatory</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>requisite</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>optional</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='subsysType'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>usb</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>pci</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>scsi</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='capsType'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='pciBackend'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </hostdev>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <rng supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='model'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>virtio</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>virtio-transitional</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>virtio-non-transitional</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='backendModel'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>random</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>egd</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>builtin</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </rng>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <filesystem supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='driverType'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>path</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>handle</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>virtiofs</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </filesystem>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <tpm supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='model'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>tpm-tis</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>tpm-crb</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='backendModel'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>emulator</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>external</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='backendVersion'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>2.0</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </tpm>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <redirdev supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='bus'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>usb</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </redirdev>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <channel supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='type'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>pty</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>unix</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </channel>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <crypto supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='model'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='type'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>qemu</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='backendModel'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>builtin</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </crypto>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <interface supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='backendType'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>default</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>passt</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </interface>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <panic supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='model'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>isa</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>hyperv</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </panic>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <console supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='type'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>null</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>vc</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>pty</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>dev</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>file</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>pipe</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>stdio</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>udp</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>tcp</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>unix</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>qemu-vdagent</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>dbus</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </console>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  </devices>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <features>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <gic supported='no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <vmcoreinfo supported='yes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <genid supported='yes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <backingStoreInput supported='yes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <backup supported='yes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <async-teardown supported='yes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <s390-pv supported='no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <ps2 supported='yes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <tdx supported='no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <sev supported='no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <sgx supported='no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <hyperv supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='features'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>relaxed</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>vapic</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>spinlocks</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>vpindex</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>runtime</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>synic</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>stimer</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>reset</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>vendor_id</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>frequencies</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>reenlightenment</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>tlbflush</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>ipi</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>avic</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>emsr_bitmap</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>xmm_input</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <defaults>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <spinlocks>4095</spinlocks>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <stimer_direct>on</stimer_direct>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <tlbflush_direct>on</tlbflush_direct>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <tlbflush_extended>on</tlbflush_extended>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </defaults>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </hyperv>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <launchSecurity supported='no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  </features>
Jan 22 08:55:18 np0005592158 nova_compute[220416]: </domainCapabilities>
Jan 22 08:55:18 np0005592158 nova_compute[220416]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 22 08:55:18 np0005592158 nova_compute[220416]: 2026-01-22 13:55:18.369 220420 DEBUG nova.virt.libvirt.host [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 22 08:55:18 np0005592158 nova_compute[220416]: 2026-01-22 13:55:18.374 220420 DEBUG nova.virt.libvirt.host [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 22 08:55:18 np0005592158 nova_compute[220416]: <domainCapabilities>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <path>/usr/libexec/qemu-kvm</path>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <domain>kvm</domain>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <arch>x86_64</arch>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <vcpu max='4096'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <iothreads supported='yes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <os supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <enum name='firmware'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <value>efi</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <loader supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='type'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>rom</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>pflash</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='readonly'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>yes</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>no</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='secure'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>yes</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>no</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </loader>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  </os>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <cpu>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <mode name='host-passthrough' supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='hostPassthroughMigratable'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>on</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>off</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </mode>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <mode name='maximum' supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='maximumMigratable'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>on</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>off</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </mode>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <mode name='host-model' supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <vendor>AMD</vendor>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='x2apic'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='tsc-deadline'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='hypervisor'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='tsc_adjust'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='spec-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='stibp'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='ssbd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='cmp_legacy'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='overflow-recov'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='succor'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='ibrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='amd-ssbd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='virt-ssbd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='lbrv'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='tsc-scale'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='vmcb-clean'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='flushbyasid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='pause-filter'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='pfthreshold'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='svme-addr-chk'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='disable' name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </mode>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <mode name='custom' supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Broadwell'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Broadwell-IBRS'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Broadwell-noTSX'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Broadwell-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Broadwell-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Broadwell-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Broadwell-v4'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Cascadelake-Server'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Cascadelake-Server-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Cascadelake-Server-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Cascadelake-Server-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Cascadelake-Server-v4'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Cascadelake-Server-v5'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='ClearwaterForest'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bhi-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ddpd-u'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sha512'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sm3'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sm4'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='ClearwaterForest-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bhi-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ddpd-u'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sha512'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sm3'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sm4'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Cooperlake'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Cooperlake-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Cooperlake-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Denverton'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Denverton-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Denverton-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Denverton-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Dhyana-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Genoa'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Genoa-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Genoa-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='perfmon-v2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Milan'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Milan-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Milan-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Milan-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Rome'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Rome-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Rome-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Rome-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Turin'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='perfmon-v2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='prefetchi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbpb'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Turin-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='perfmon-v2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='prefetchi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbpb'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-v4'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-v5'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='GraniteRapids'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='GraniteRapids-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='GraniteRapids-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx10'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx10-128'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx10-256'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx10-512'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='GraniteRapids-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx10'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx10-128'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx10-256'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx10-512'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Haswell'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Haswell-IBRS'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Haswell-noTSX'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Haswell-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Haswell-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Haswell-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Haswell-v4'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Icelake-Server'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Icelake-Server-noTSX'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Icelake-Server-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Icelake-Server-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Icelake-Server-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Icelake-Server-v4'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Icelake-Server-v5'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Icelake-Server-v6'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Icelake-Server-v7'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='IvyBridge'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='IvyBridge-IBRS'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='IvyBridge-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='IvyBridge-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='KnightsMill'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512er'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512pf'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='KnightsMill-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512er'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512pf'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Opteron_G4'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Opteron_G4-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Opteron_G5'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tbm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Opteron_G5-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tbm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='SapphireRapids'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='SapphireRapids-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:18.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='SapphireRapids-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='SapphireRapids-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='SapphireRapids-v4'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='SierraForest'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='SierraForest-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='SierraForest-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='SierraForest-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Client'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Client-IBRS'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Client-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Client-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Client-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Client-v4'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Server'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Server-IBRS'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Server-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Server-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Server-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Server-v4'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Server-v5'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Snowridge'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Snowridge-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Snowridge-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Snowridge-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Snowridge-v4'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='athlon'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='athlon-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='core2duo'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='core2duo-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='coreduo'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='coreduo-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='n270'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='n270-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='phenom'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='phenom-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </mode>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  </cpu>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <memoryBacking supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <enum name='sourceType'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <value>file</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <value>anonymous</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <value>memfd</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  </memoryBacking>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <devices>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <disk supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='diskDevice'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>disk</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>cdrom</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>floppy</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>lun</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='bus'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>fdc</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>scsi</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>virtio</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>usb</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>sata</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='model'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>virtio</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>virtio-transitional</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>virtio-non-transitional</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </disk>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <graphics supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='type'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>vnc</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>egl-headless</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>dbus</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </graphics>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <video supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='modelType'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>vga</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>cirrus</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>virtio</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>none</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>bochs</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>ramfb</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </video>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <hostdev supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='mode'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>subsystem</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='startupPolicy'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>default</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>mandatory</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>requisite</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>optional</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='subsysType'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>usb</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>pci</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>scsi</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='capsType'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='pciBackend'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </hostdev>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <rng supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='model'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>virtio</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>virtio-transitional</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>virtio-non-transitional</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='backendModel'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>random</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>egd</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>builtin</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </rng>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <filesystem supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='driverType'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>path</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>handle</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>virtiofs</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </filesystem>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <tpm supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='model'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>tpm-tis</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>tpm-crb</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='backendModel'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>emulator</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>external</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='backendVersion'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>2.0</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </tpm>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <redirdev supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='bus'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>usb</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </redirdev>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <channel supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='type'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>pty</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>unix</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </channel>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <crypto supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='model'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='type'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>qemu</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='backendModel'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>builtin</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </crypto>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <interface supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='backendType'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>default</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>passt</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </interface>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <panic supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='model'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>isa</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>hyperv</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </panic>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <console supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='type'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>null</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>vc</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>pty</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>dev</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>file</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>pipe</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>stdio</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>udp</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>tcp</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>unix</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>qemu-vdagent</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>dbus</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </console>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  </devices>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <features>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <gic supported='no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <vmcoreinfo supported='yes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <genid supported='yes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <backingStoreInput supported='yes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <backup supported='yes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <async-teardown supported='yes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <s390-pv supported='no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <ps2 supported='yes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <tdx supported='no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <sev supported='no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <sgx supported='no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <hyperv supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='features'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>relaxed</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>vapic</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>spinlocks</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>vpindex</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>runtime</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>synic</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>stimer</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>reset</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>vendor_id</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>frequencies</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>reenlightenment</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>tlbflush</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>ipi</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>avic</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>emsr_bitmap</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>xmm_input</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <defaults>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <spinlocks>4095</spinlocks>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <stimer_direct>on</stimer_direct>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <tlbflush_direct>on</tlbflush_direct>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <tlbflush_extended>on</tlbflush_extended>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </defaults>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </hyperv>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <launchSecurity supported='no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  </features>
Jan 22 08:55:18 np0005592158 nova_compute[220416]: </domainCapabilities>
Jan 22 08:55:18 np0005592158 nova_compute[220416]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Jan 22 08:55:18 np0005592158 nova_compute[220416]: 2026-01-22 13:55:18.453 220420 DEBUG nova.virt.libvirt.host [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 22 08:55:18 np0005592158 nova_compute[220416]: <domainCapabilities>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <path>/usr/libexec/qemu-kvm</path>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <domain>kvm</domain>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <arch>x86_64</arch>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <vcpu max='240'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <iothreads supported='yes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <os supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <enum name='firmware'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <loader supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='type'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>rom</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>pflash</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='readonly'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>yes</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>no</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='secure'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>no</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </loader>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  </os>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <cpu>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <mode name='host-passthrough' supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='hostPassthroughMigratable'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>on</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>off</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </mode>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <mode name='maximum' supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='maximumMigratable'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>on</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>off</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </mode>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <mode name='host-model' supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <vendor>AMD</vendor>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='x2apic'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='tsc-deadline'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='hypervisor'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='tsc_adjust'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='spec-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='stibp'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='ssbd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='cmp_legacy'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='overflow-recov'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='succor'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='ibrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='amd-ssbd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='virt-ssbd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='lbrv'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='tsc-scale'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='vmcb-clean'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='flushbyasid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='pause-filter'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='pfthreshold'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='svme-addr-chk'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <feature policy='disable' name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </mode>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <mode name='custom' supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Broadwell'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Broadwell-IBRS'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Broadwell-noTSX'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Broadwell-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Broadwell-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Broadwell-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Broadwell-v4'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Cascadelake-Server'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Cascadelake-Server-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Cascadelake-Server-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Cascadelake-Server-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Cascadelake-Server-v4'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Cascadelake-Server-v5'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='ClearwaterForest'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bhi-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ddpd-u'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sha512'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sm3'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sm4'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='ClearwaterForest-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bhi-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ddpd-u'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sha512'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sm3'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sm4'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Cooperlake'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Cooperlake-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Cooperlake-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Denverton'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Denverton-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Denverton-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Denverton-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Dhyana-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Genoa'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Genoa-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Genoa-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='perfmon-v2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Milan'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Milan-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Milan-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Milan-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Rome'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Rome-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Rome-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Rome-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Turin'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='perfmon-v2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='prefetchi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbpb'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-Turin-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='perfmon-v2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='prefetchi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbpb'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-v4'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='EPYC-v5'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='GraniteRapids'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='GraniteRapids-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='GraniteRapids-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx10'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx10-128'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx10-256'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx10-512'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='GraniteRapids-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx10'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx10-128'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx10-256'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx10-512'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Haswell'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Haswell-IBRS'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Haswell-noTSX'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Haswell-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Haswell-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Haswell-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Haswell-v4'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Icelake-Server'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Icelake-Server-noTSX'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Icelake-Server-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Icelake-Server-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Icelake-Server-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Icelake-Server-v4'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Icelake-Server-v5'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Icelake-Server-v6'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Icelake-Server-v7'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='IvyBridge'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='IvyBridge-IBRS'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='IvyBridge-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='IvyBridge-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='KnightsMill'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512er'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512pf'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='KnightsMill-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512er'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512pf'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Opteron_G4'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Opteron_G4-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Opteron_G5'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tbm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Opteron_G5-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tbm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='SapphireRapids'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='SapphireRapids-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 python3.9[221098]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='SapphireRapids-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='SapphireRapids-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='SapphireRapids-v4'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='SierraForest'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='SierraForest-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='SierraForest-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='SierraForest-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Client'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Client-IBRS'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Client-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Client-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Client-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Client-v4'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Server'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Server-IBRS'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Server-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Server-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Server-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Server-v4'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Skylake-Server-v5'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Snowridge'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Snowridge-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Snowridge-v2'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Snowridge-v3'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='Snowridge-v4'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='athlon'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='athlon-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='core2duo'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='core2duo-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='coreduo'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='coreduo-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='n270'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='n270-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='phenom'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <blockers model='phenom-v1'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </blockers>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </mode>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  </cpu>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <memoryBacking supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <enum name='sourceType'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <value>file</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <value>anonymous</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <value>memfd</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  </memoryBacking>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <devices>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <disk supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='diskDevice'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>disk</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>cdrom</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>floppy</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>lun</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='bus'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>ide</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>fdc</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>scsi</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>virtio</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>usb</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>sata</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='model'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>virtio</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>virtio-transitional</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>virtio-non-transitional</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </disk>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <graphics supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='type'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>vnc</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>egl-headless</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>dbus</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </graphics>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <video supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='modelType'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>vga</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>cirrus</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>virtio</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>none</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>bochs</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>ramfb</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </video>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <hostdev supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='mode'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>subsystem</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='startupPolicy'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>default</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>mandatory</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>requisite</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>optional</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='subsysType'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>usb</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>pci</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>scsi</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='capsType'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='pciBackend'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </hostdev>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <rng supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='model'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>virtio</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>virtio-transitional</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>virtio-non-transitional</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='backendModel'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>random</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>egd</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>builtin</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </rng>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <filesystem supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='driverType'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>path</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>handle</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>virtiofs</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </filesystem>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <tpm supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='model'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>tpm-tis</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>tpm-crb</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='backendModel'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>emulator</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>external</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='backendVersion'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>2.0</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </tpm>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <redirdev supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='bus'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>usb</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </redirdev>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <channel supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='type'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>pty</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>unix</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </channel>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <crypto supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='model'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='type'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>qemu</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='backendModel'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>builtin</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </crypto>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <interface supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='backendType'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>default</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>passt</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </interface>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <panic supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='model'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>isa</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>hyperv</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </panic>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <console supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='type'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>null</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>vc</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>pty</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>dev</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>file</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>pipe</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>stdio</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>udp</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>tcp</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>unix</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>qemu-vdagent</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>dbus</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </console>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  </devices>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <features>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <gic supported='no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <vmcoreinfo supported='yes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <genid supported='yes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <backingStoreInput supported='yes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <backup supported='yes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <async-teardown supported='yes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <s390-pv supported='no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <ps2 supported='yes'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <tdx supported='no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <sev supported='no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <sgx supported='no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <hyperv supported='yes'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <enum name='features'>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>relaxed</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>vapic</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>spinlocks</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>vpindex</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>runtime</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>synic</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>stimer</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>reset</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>vendor_id</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>frequencies</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>reenlightenment</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>tlbflush</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>ipi</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>avic</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>emsr_bitmap</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <value>xmm_input</value>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </enum>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      <defaults>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <spinlocks>4095</spinlocks>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <stimer_direct>on</stimer_direct>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <tlbflush_direct>on</tlbflush_direct>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <tlbflush_extended>on</tlbflush_extended>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:      </defaults>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    </hyperv>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:    <launchSecurity supported='no'/>
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  </features>
Jan 22 08:55:18 np0005592158 nova_compute[220416]: </domainCapabilities>
Jan 22 08:55:18 np0005592158 nova_compute[220416]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Jan 22 08:55:18 np0005592158 nova_compute[220416]: 2026-01-22 13:55:18.531 220420 DEBUG nova.virt.libvirt.host [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Jan 22 08:55:18 np0005592158 nova_compute[220416]: 2026-01-22 13:55:18.531 220420 INFO nova.virt.libvirt.host [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Secure Boot support detected#033[00m
Jan 22 08:55:18 np0005592158 nova_compute[220416]: 2026-01-22 13:55:18.534 220420 INFO nova.virt.libvirt.driver [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Jan 22 08:55:18 np0005592158 nova_compute[220416]: 2026-01-22 13:55:18.534 220420 INFO nova.virt.libvirt.driver [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Jan 22 08:55:18 np0005592158 nova_compute[220416]: 2026-01-22 13:55:18.545 220420 DEBUG nova.virt.libvirt.driver [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] cpu compare xml: <cpu match="exact">
Jan 22 08:55:18 np0005592158 nova_compute[220416]:  <model>Nehalem</model>
Jan 22 08:55:18 np0005592158 nova_compute[220416]: </cpu>
Jan 22 08:55:18 np0005592158 nova_compute[220416]: _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019#033[00m
Jan 22 08:55:18 np0005592158 nova_compute[220416]: 2026-01-22 13:55:18.548 220420 DEBUG nova.virt.libvirt.driver [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Jan 22 08:55:18 np0005592158 nova_compute[220416]: 2026-01-22 13:55:18.632 220420 INFO nova.virt.node [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Determined node identity 9903a6f8-fb0a-4d8e-b632-398eaedd969e from /var/lib/nova/compute_id#033[00m
Jan 22 08:55:18 np0005592158 nova_compute[220416]: 2026-01-22 13:55:18.659 220420 WARNING nova.compute.manager [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Compute nodes ['9903a6f8-fb0a-4d8e-b632-398eaedd969e'] for host compute-1.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Jan 22 08:55:18 np0005592158 nova_compute[220416]: 2026-01-22 13:55:18.703 220420 INFO nova.compute.manager [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Jan 22 08:55:18 np0005592158 nova_compute[220416]: 2026-01-22 13:55:18.752 220420 WARNING nova.compute.manager [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] No compute node record found for host compute-1.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.#033[00m
Jan 22 08:55:18 np0005592158 nova_compute[220416]: 2026-01-22 13:55:18.753 220420 DEBUG oslo_concurrency.lockutils [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 08:55:18 np0005592158 nova_compute[220416]: 2026-01-22 13:55:18.753 220420 DEBUG oslo_concurrency.lockutils [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 08:55:18 np0005592158 nova_compute[220416]: 2026-01-22 13:55:18.753 220420 DEBUG oslo_concurrency.lockutils [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 08:55:18 np0005592158 nova_compute[220416]: 2026-01-22 13:55:18.754 220420 DEBUG nova.compute.resource_tracker [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 08:55:18 np0005592158 nova_compute[220416]: 2026-01-22 13:55:18.754 220420 DEBUG oslo_concurrency.processutils [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 08:55:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:55:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:18.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:19 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 08:55:19 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/532836915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 08:55:19 np0005592158 nova_compute[220416]: 2026-01-22 13:55:19.253 220420 DEBUG oslo_concurrency.processutils [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 08:55:19 np0005592158 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 08:55:19 np0005592158 systemd[1]: Starting libvirt nodedev daemon...
Jan 22 08:55:19 np0005592158 systemd[1]: Started libvirt nodedev daemon.
Jan 22 08:55:19 np0005592158 nova_compute[220416]: 2026-01-22 13:55:19.609 220420 WARNING nova.virt.libvirt.driver [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 08:55:19 np0005592158 nova_compute[220416]: 2026-01-22 13:55:19.611 220420 DEBUG nova.compute.resource_tracker [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5299MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 08:55:19 np0005592158 nova_compute[220416]: 2026-01-22 13:55:19.611 220420 DEBUG oslo_concurrency.lockutils [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 08:55:19 np0005592158 nova_compute[220416]: 2026-01-22 13:55:19.611 220420 DEBUG oslo_concurrency.lockutils [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 08:55:19 np0005592158 python3.9[221315]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:55:19 np0005592158 systemd[1]: Stopping nova_compute container...
Jan 22 08:55:19 np0005592158 nova_compute[220416]: 2026-01-22 13:55:19.795 220420 DEBUG oslo_concurrency.lockutils [None req-c26f26d9-77c0-4e31-8288-412d2a428b9d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.184s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 08:55:19 np0005592158 nova_compute[220416]: 2026-01-22 13:55:19.796 220420 DEBUG oslo_concurrency.lockutils [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 08:55:19 np0005592158 nova_compute[220416]: 2026-01-22 13:55:19.796 220420 DEBUG oslo_concurrency.lockutils [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 08:55:19 np0005592158 nova_compute[220416]: 2026-01-22 13:55:19.796 220420 DEBUG oslo_concurrency.lockutils [None req-54d1563f-94bb-47b1-8b7a-3a840b5cc9c0 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 08:55:20 np0005592158 virtqemud[220928]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 22 08:55:20 np0005592158 virtqemud[220928]: hostname: compute-1
Jan 22 08:55:20 np0005592158 virtqemud[220928]: End of file while reading data: Input/output error
Jan 22 08:55:20 np0005592158 systemd[1]: libpod-026f0c814fdad2eb16abf9c007c9103190d38a095777b87174b3489312fc6b9a.scope: Deactivated successfully.
Jan 22 08:55:20 np0005592158 systemd[1]: libpod-026f0c814fdad2eb16abf9c007c9103190d38a095777b87174b3489312fc6b9a.scope: Consumed 4.199s CPU time.
Jan 22 08:55:20 np0005592158 podman[221321]: 2026-01-22 13:55:20.339414361 +0000 UTC m=+0.588968318 container died 026f0c814fdad2eb16abf9c007c9103190d38a095777b87174b3489312fc6b9a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=edpm)
Jan 22 08:55:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:20.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:20 np0005592158 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-026f0c814fdad2eb16abf9c007c9103190d38a095777b87174b3489312fc6b9a-userdata-shm.mount: Deactivated successfully.
Jan 22 08:55:20 np0005592158 systemd[1]: var-lib-containers-storage-overlay-cb57ec80f67d7d6847e36490c1aece2d6b4c7211f0840cc8c85095f4ddd5c0ac-merged.mount: Deactivated successfully.
Jan 22 08:55:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:20.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:22.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:22.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:23 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1108 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:55:23 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:23 np0005592158 podman[221321]: 2026-01-22 13:55:23.074506178 +0000 UTC m=+3.324060125 container cleanup 026f0c814fdad2eb16abf9c007c9103190d38a095777b87174b3489312fc6b9a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 22 08:55:23 np0005592158 podman[221321]: nova_compute
Jan 22 08:55:23 np0005592158 podman[221355]: nova_compute
Jan 22 08:55:23 np0005592158 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 22 08:55:23 np0005592158 systemd[1]: Stopped nova_compute container.
Jan 22 08:55:23 np0005592158 systemd[1]: Starting nova_compute container...
Jan 22 08:55:23 np0005592158 podman[221368]: 2026-01-22 13:55:23.312930392 +0000 UTC m=+0.144641350 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 22 08:55:23 np0005592158 systemd[1]: Started libcrun container.
Jan 22 08:55:23 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb57ec80f67d7d6847e36490c1aece2d6b4c7211f0840cc8c85095f4ddd5c0ac/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 22 08:55:23 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb57ec80f67d7d6847e36490c1aece2d6b4c7211f0840cc8c85095f4ddd5c0ac/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 22 08:55:23 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb57ec80f67d7d6847e36490c1aece2d6b4c7211f0840cc8c85095f4ddd5c0ac/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 22 08:55:23 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb57ec80f67d7d6847e36490c1aece2d6b4c7211f0840cc8c85095f4ddd5c0ac/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 22 08:55:23 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb57ec80f67d7d6847e36490c1aece2d6b4c7211f0840cc8c85095f4ddd5c0ac/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 22 08:55:23 np0005592158 podman[221369]: 2026-01-22 13:55:23.442773337 +0000 UTC m=+0.267504664 container init 026f0c814fdad2eb16abf9c007c9103190d38a095777b87174b3489312fc6b9a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm)
Jan 22 08:55:23 np0005592158 podman[221369]: 2026-01-22 13:55:23.450764407 +0000 UTC m=+0.275495704 container start 026f0c814fdad2eb16abf9c007c9103190d38a095777b87174b3489312fc6b9a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 08:55:23 np0005592158 podman[221369]: nova_compute
Jan 22 08:55:23 np0005592158 nova_compute[221400]: + sudo -E kolla_set_configs
Jan 22 08:55:23 np0005592158 systemd[1]: Started nova_compute container.
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Validating config file
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Copying service configuration files
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Deleting /etc/ceph
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Creating directory /etc/ceph
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Setting permission for /etc/ceph
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Writing out command to execute
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 22 08:55:23 np0005592158 nova_compute[221400]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 22 08:55:23 np0005592158 nova_compute[221400]: ++ cat /run_command
Jan 22 08:55:23 np0005592158 nova_compute[221400]: + CMD=nova-compute
Jan 22 08:55:23 np0005592158 nova_compute[221400]: + ARGS=
Jan 22 08:55:23 np0005592158 nova_compute[221400]: + sudo kolla_copy_cacerts
Jan 22 08:55:23 np0005592158 nova_compute[221400]: + [[ ! -n '' ]]
Jan 22 08:55:23 np0005592158 nova_compute[221400]: + . kolla_extend_start
Jan 22 08:55:23 np0005592158 nova_compute[221400]: + echo 'Running command: '\''nova-compute'\'''
Jan 22 08:55:23 np0005592158 nova_compute[221400]: Running command: 'nova-compute'
Jan 22 08:55:23 np0005592158 nova_compute[221400]: + umask 0022
Jan 22 08:55:23 np0005592158 nova_compute[221400]: + exec nova-compute
Jan 22 08:55:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:55:24 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:24 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:24 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:24 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:24 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:55:24 np0005592158 python3.9[221567]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 22 08:55:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:24.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:24 np0005592158 systemd[1]: Started libpod-conmon-4dfd2302381300ceaae8150882466b81aa1f5024d159d8169f4c727b714fe739.scope.
Jan 22 08:55:24 np0005592158 systemd[1]: Started libcrun container.
Jan 22 08:55:24 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae0cff6674c9c3a87c62f9af7a9880fa0c0580f48f5065ae7c4df316d438a506/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 22 08:55:24 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae0cff6674c9c3a87c62f9af7a9880fa0c0580f48f5065ae7c4df316d438a506/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 22 08:55:24 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae0cff6674c9c3a87c62f9af7a9880fa0c0580f48f5065ae7c4df316d438a506/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 22 08:55:24 np0005592158 podman[221592]: 2026-01-22 13:55:24.598884991 +0000 UTC m=+0.126522654 container init 4dfd2302381300ceaae8150882466b81aa1f5024d159d8169f4c727b714fe739 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, container_name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Jan 22 08:55:24 np0005592158 podman[221592]: 2026-01-22 13:55:24.61121054 +0000 UTC m=+0.138848183 container start 4dfd2302381300ceaae8150882466b81aa1f5024d159d8169f4c727b714fe739 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm)
Jan 22 08:55:24 np0005592158 python3.9[221567]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 22 08:55:24 np0005592158 nova_compute_init[221613]: INFO:nova_statedir:Applying nova statedir ownership
Jan 22 08:55:24 np0005592158 nova_compute_init[221613]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 22 08:55:24 np0005592158 nova_compute_init[221613]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 22 08:55:24 np0005592158 nova_compute_init[221613]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 22 08:55:24 np0005592158 nova_compute_init[221613]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 22 08:55:24 np0005592158 nova_compute_init[221613]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 22 08:55:24 np0005592158 nova_compute_init[221613]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 22 08:55:24 np0005592158 nova_compute_init[221613]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 22 08:55:24 np0005592158 nova_compute_init[221613]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 22 08:55:24 np0005592158 nova_compute_init[221613]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 22 08:55:24 np0005592158 nova_compute_init[221613]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 22 08:55:24 np0005592158 nova_compute_init[221613]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 22 08:55:24 np0005592158 nova_compute_init[221613]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 22 08:55:24 np0005592158 nova_compute_init[221613]: INFO:nova_statedir:Nova statedir ownership complete
Jan 22 08:55:24 np0005592158 systemd[1]: libpod-4dfd2302381300ceaae8150882466b81aa1f5024d159d8169f4c727b714fe739.scope: Deactivated successfully.
Jan 22 08:55:24 np0005592158 podman[221627]: 2026-01-22 13:55:24.711117303 +0000 UTC m=+0.024239117 container died 4dfd2302381300ceaae8150882466b81aa1f5024d159d8169f4c727b714fe739 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 22 08:55:24 np0005592158 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4dfd2302381300ceaae8150882466b81aa1f5024d159d8169f4c727b714fe739-userdata-shm.mount: Deactivated successfully.
Jan 22 08:55:24 np0005592158 systemd[1]: var-lib-containers-storage-overlay-ae0cff6674c9c3a87c62f9af7a9880fa0c0580f48f5065ae7c4df316d438a506-merged.mount: Deactivated successfully.
Jan 22 08:55:24 np0005592158 podman[221627]: 2026-01-22 13:55:24.752036866 +0000 UTC m=+0.065158630 container cleanup 4dfd2302381300ceaae8150882466b81aa1f5024d159d8169f4c727b714fe739 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, container_name=nova_compute_init, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 22 08:55:24 np0005592158 systemd[1]: libpod-conmon-4dfd2302381300ceaae8150882466b81aa1f5024d159d8169f4c727b714fe739.scope: Deactivated successfully.
Jan 22 08:55:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:55:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:24.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:55:25 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:25 np0005592158 nova_compute[221400]: 2026-01-22 13:55:25.738 221408 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 22 08:55:25 np0005592158 nova_compute[221400]: 2026-01-22 13:55:25.738 221408 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 22 08:55:25 np0005592158 nova_compute[221400]: 2026-01-22 13:55:25.739 221408 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 22 08:55:25 np0005592158 nova_compute[221400]: 2026-01-22 13:55:25.739 221408 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Jan 22 08:55:25 np0005592158 systemd[1]: session-49.scope: Deactivated successfully.
Jan 22 08:55:25 np0005592158 systemd[1]: session-49.scope: Consumed 2min 3.474s CPU time.
Jan 22 08:55:25 np0005592158 systemd-logind[787]: Session 49 logged out. Waiting for processes to exit.
Jan 22 08:55:25 np0005592158 systemd-logind[787]: Removed session 49.
Jan 22 08:55:25 np0005592158 nova_compute[221400]: 2026-01-22 13:55:25.913 221408 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 08:55:25 np0005592158 nova_compute[221400]: 2026-01-22 13:55:25.938 221408 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 08:55:25 np0005592158 nova_compute[221400]: 2026-01-22 13:55:25.939 221408 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 22 08:55:26 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:26.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.557 221408 INFO nova.virt.driver [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.677 221408 INFO nova.compute.provider_config [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.695 221408 DEBUG oslo_concurrency.lockutils [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.696 221408 DEBUG oslo_concurrency.lockutils [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.696 221408 DEBUG oslo_concurrency.lockutils [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.696 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.697 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.697 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.697 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.697 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.698 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.698 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.698 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.699 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.699 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.699 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.699 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.700 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.700 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.700 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.700 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.701 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.701 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.701 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.701 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.702 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.702 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.702 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.703 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.703 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.703 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.703 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.704 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.704 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.704 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.704 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.705 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.705 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.705 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.705 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.705 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.706 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.706 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.706 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.707 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.707 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.707 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.707 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.708 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.708 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.708 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.708 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.709 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.709 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.709 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.709 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.710 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.710 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.710 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.711 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.711 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.711 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.711 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.712 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.712 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.712 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.712 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.713 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.713 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.713 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.713 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.713 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.714 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.714 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.714 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.714 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.714 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.715 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.715 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.715 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.715 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.715 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.716 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.716 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.716 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.716 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.716 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.717 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.717 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.717 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.717 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.717 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.718 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.718 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.718 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.718 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.719 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.719 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.719 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.719 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.719 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.719 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.720 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.720 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.720 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.720 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.720 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.721 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.721 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.721 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.721 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.722 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.722 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.722 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.722 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.722 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.722 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.723 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.723 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.723 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.723 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.723 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.724 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.724 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.724 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.724 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.724 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.725 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.725 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.725 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.725 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.725 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.726 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.726 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.726 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.726 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.726 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.727 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.727 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.727 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.727 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.727 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.727 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.728 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.728 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.728 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.728 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.728 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.729 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.729 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.729 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.729 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.729 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.730 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.730 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.730 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.730 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.730 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.731 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.731 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.731 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.731 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.731 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.732 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.732 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.732 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.732 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.732 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.733 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.733 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.733 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.733 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.733 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.734 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.734 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.734 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.734 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.734 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.734 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.735 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.735 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.735 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.735 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.735 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.736 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.736 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.736 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.736 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.736 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.737 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.737 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.737 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.737 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.737 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.738 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.738 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.738 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.738 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.738 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.738 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.739 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.739 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.739 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.739 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.739 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.739 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.739 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.740 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.740 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.740 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.740 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.740 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.740 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.740 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.741 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.741 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.741 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.741 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.741 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.741 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.741 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.742 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.742 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.742 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.742 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.742 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.742 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.743 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.743 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.743 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.743 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.743 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.744 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.744 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.744 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.744 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.744 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.745 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.745 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.745 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.745 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.745 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.746 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.746 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.746 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.746 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.746 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.746 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.747 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.747 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.747 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.747 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.747 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.747 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.748 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.748 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.748 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.748 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.748 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.748 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.748 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.749 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.749 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.749 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.749 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.749 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.750 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.750 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.750 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.750 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.751 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.751 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.751 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.751 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.751 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.752 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.752 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.752 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.752 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.752 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.753 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.753 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.753 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.753 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.753 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.754 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.754 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.754 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.754 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.754 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.755 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.755 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.755 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.755 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.756 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.756 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.756 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.756 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.756 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.756 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.757 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.757 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.757 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.757 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.757 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.758 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.758 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.758 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.758 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.758 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.759 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.759 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.759 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.759 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.759 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.759 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.760 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.760 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.760 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.760 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.761 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.761 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.761 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.761 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.761 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.762 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.762 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.762 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.762 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.762 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.763 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.763 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.763 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.763 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.763 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.763 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.764 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.764 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.764 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.764 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.764 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.765 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.765 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.765 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.765 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.765 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.766 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.766 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.766 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.766 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.766 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.767 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.767 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.767 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.767 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.767 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.767 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.768 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.768 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.768 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.769 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.769 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.769 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.769 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.769 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.769 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.770 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.770 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.770 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.770 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.770 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.771 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.771 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.771 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.771 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.771 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.772 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.772 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.772 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.772 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.772 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.773 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.773 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.773 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.773 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.773 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.773 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.774 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.774 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.774 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.774 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.774 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.775 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.775 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.775 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.775 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.775 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.776 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.776 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.776 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.776 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.776 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.777 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.777 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.778 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.778 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.778 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.779 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.779 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.779 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.779 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.779 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.779 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.780 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.780 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.780 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.780 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.780 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.781 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.781 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.781 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.781 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.781 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.782 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.782 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.782 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.782 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.783 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.783 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.783 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.783 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.784 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.784 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.784 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.784 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.784 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.784 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.785 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.785 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.785 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.785 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.785 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.785 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.786 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.786 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.786 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.786 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.787 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.787 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.787 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.788 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.788 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.788 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.788 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.788 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.788 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.789 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.789 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.789 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.789 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.789 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.790 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.790 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.790 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.790 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.791 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.791 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.791 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.791 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.791 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.792 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.792 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.792 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.792 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.792 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.793 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.793 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.793 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.793 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.793 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.794 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.794 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.794 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.794 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.794 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.795 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.795 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.795 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.795 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.795 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.796 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.796 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.796 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.796 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.796 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.797 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.797 221408 WARNING oslo_config.cfg [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 22 08:55:26 np0005592158 nova_compute[221400]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 22 08:55:26 np0005592158 nova_compute[221400]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 22 08:55:26 np0005592158 nova_compute[221400]: and ``live_migration_inbound_addr`` respectively.
Jan 22 08:55:26 np0005592158 nova_compute[221400]: ).  Its value may be silently ignored in the future.#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.797 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.797 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.798 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.798 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.798 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.798 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.799 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.799 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.799 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.799 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.800 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.800 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.800 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.800 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.800 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.800 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.801 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.801 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.801 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.rbd_secret_uuid        = 088fe176-0106-5401-803c-2da38b73b76a log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.801 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.801 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.802 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.802 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.802 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.802 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.802 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.803 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.803 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.803 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.803 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.803 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.804 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.804 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.804 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.804 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.805 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.805 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.805 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.805 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.805 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.806 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.806 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.806 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.806 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.807 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.807 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.807 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.807 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.807 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.807 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.808 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.808 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.808 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.808 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.808 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.808 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.808 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.809 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.809 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.809 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.809 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.809 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.809 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.809 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.810 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.810 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.810 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.810 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.810 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.810 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.810 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.811 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.811 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.811 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.811 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.811 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.812 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.812 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.812 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.812 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.813 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.813 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.813 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.813 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:55:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:26.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.813 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.814 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.814 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.814 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.814 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.814 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.814 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.814 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.815 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.815 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.815 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.815 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.815 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.815 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.815 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.816 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.816 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.816 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.816 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.816 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.816 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.816 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.817 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.817 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.817 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.817 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.817 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.817 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.817 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.818 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.818 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.818 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.818 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.818 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.818 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.818 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.819 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.819 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.819 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.819 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.819 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.819 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.819 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.820 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.820 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.820 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.820 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.820 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.820 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.820 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.821 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.821 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.821 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.821 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.821 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.821 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.822 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.822 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.822 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.822 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.822 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.822 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.822 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.823 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.823 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.823 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.823 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.823 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.823 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.823 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.824 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.824 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.824 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.824 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.824 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.824 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.824 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.825 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.825 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.825 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.825 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.825 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.825 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.825 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.826 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.826 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.826 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.826 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.826 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.826 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.826 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.827 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.827 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.827 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.827 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.827 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.827 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.828 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.828 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.828 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.828 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.828 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.828 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.828 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.828 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.829 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.829 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.829 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.829 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.829 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.829 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.829 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.830 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.830 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.830 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.830 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.830 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.830 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.831 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.831 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.831 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.831 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.831 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.831 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.831 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.832 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.832 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.832 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.832 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.832 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.832 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.832 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.833 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.833 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.833 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.833 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.833 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.833 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.833 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.834 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.834 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.834 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.834 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.834 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.834 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.834 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.835 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.835 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.835 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.835 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.835 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.835 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.835 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.835 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.836 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.836 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.836 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.836 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.836 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.836 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.836 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.837 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.837 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.837 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.837 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.837 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.838 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.838 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.838 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.838 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.838 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.838 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.839 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.839 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.839 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.839 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.839 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.839 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.839 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.840 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.840 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.840 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.840 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.840 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.840 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.841 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.841 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.841 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.841 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.841 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.841 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.842 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.842 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.842 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.842 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.842 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.842 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.843 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.843 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.843 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.843 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.843 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.843 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.843 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.844 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.844 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.844 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.844 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.844 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.844 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.844 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.845 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.845 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.845 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.845 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.845 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.845 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.845 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.846 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.846 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.846 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.846 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.846 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.846 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.846 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.847 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.847 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.847 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.847 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.847 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.848 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.848 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.848 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.848 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.848 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.848 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.848 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.849 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.849 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.849 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.849 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.849 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.849 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.850 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.850 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.850 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.850 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.850 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.850 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.850 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.851 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.851 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.851 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.851 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.851 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.851 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.851 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.852 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.852 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.852 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.852 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.852 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.852 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.852 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.853 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.853 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.853 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.853 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.853 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.853 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.853 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.854 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.854 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.854 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.854 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.854 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.854 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.854 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.854 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.855 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.855 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.855 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.855 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.855 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.855 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.855 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.856 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.856 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.856 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.856 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.856 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.856 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.856 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.857 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.857 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.857 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.857 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.857 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.858 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.858 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.858 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.858 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.858 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.858 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.859 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.859 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.859 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.859 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.859 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.859 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.860 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.860 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.860 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.860 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.860 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.861 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.861 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.861 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.861 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.861 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.861 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.862 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.862 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.862 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.862 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.863 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.863 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.863 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.863 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.863 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.863 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.864 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.864 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.864 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.864 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.864 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.865 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.865 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.865 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.865 221408 DEBUG oslo_service.service [None req-08563704-7add-4efc-b63e-1f2611a559c1 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.867 221408 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.883 221408 INFO nova.virt.node [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Determined node identity 9903a6f8-fb0a-4d8e-b632-398eaedd969e from /var/lib/nova/compute_id
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.884 221408 DEBUG nova.virt.libvirt.host [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.885 221408 DEBUG nova.virt.libvirt.host [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.885 221408 DEBUG nova.virt.libvirt.host [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.885 221408 DEBUG nova.virt.libvirt.host [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.897 221408 DEBUG nova.virt.libvirt.host [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f120a650b50> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.899 221408 DEBUG nova.virt.libvirt.host [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f120a650b50> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.900 221408 INFO nova.virt.libvirt.driver [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Connection event '1' reason 'None'
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.907 221408 INFO nova.virt.libvirt.host [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Libvirt host capabilities <capabilities>
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 
Jan 22 08:55:26 np0005592158 nova_compute[221400]:  <host>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    <uuid>2198fae5-1aa3-4940-83f6-677ed40734bb</uuid>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    <cpu>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <arch>x86_64</arch>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model>EPYC-Rome-v4</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <vendor>AMD</vendor>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <microcode version='16777317'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <signature family='23' model='49' stepping='0'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <maxphysaddr mode='emulate' bits='40'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature name='x2apic'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature name='tsc-deadline'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature name='osxsave'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature name='hypervisor'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature name='tsc_adjust'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature name='spec-ctrl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature name='stibp'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature name='arch-capabilities'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature name='ssbd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature name='cmp_legacy'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature name='topoext'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature name='virt-ssbd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature name='lbrv'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature name='tsc-scale'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature name='vmcb-clean'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature name='pause-filter'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature name='pfthreshold'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature name='svme-addr-chk'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature name='rdctl-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature name='skip-l1dfl-vmentry'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature name='mds-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature name='pschange-mc-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <pages unit='KiB' size='4'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <pages unit='KiB' size='2048'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <pages unit='KiB' size='1048576'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    </cpu>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    <power_management>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <suspend_mem/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    </power_management>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    <iommu support='no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    <migration_features>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <live/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <uri_transports>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <uri_transport>tcp</uri_transport>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <uri_transport>rdma</uri_transport>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </uri_transports>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    </migration_features>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    <topology>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <cells num='1'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <cell id='0'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:          <memory unit='KiB'>7864312</memory>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:          <pages unit='KiB' size='4'>1966078</pages>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:          <pages unit='KiB' size='2048'>0</pages>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:          <pages unit='KiB' size='1048576'>0</pages>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:          <distances>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:            <sibling id='0' value='10'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:          </distances>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:          <cpus num='8'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:          </cpus>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        </cell>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </cells>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    </topology>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    <cache>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    </cache>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    <secmodel>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model>selinux</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <doi>0</doi>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    </secmodel>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    <secmodel>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model>dac</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <doi>0</doi>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <baselabel type='kvm'>+107:+107</baselabel>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <baselabel type='qemu'>+107:+107</baselabel>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    </secmodel>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:  </host>
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 
Jan 22 08:55:26 np0005592158 nova_compute[221400]:  <guest>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    <os_type>hvm</os_type>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    <arch name='i686'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <wordsize>32</wordsize>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <domain type='qemu'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <domain type='kvm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    </arch>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    <features>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <pae/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <nonpae/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <acpi default='on' toggle='yes'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <apic default='on' toggle='no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <cpuselection/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <deviceboot/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <disksnapshot default='on' toggle='no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <externalSnapshot/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    </features>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:  </guest>
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 
Jan 22 08:55:26 np0005592158 nova_compute[221400]:  <guest>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    <os_type>hvm</os_type>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    <arch name='x86_64'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <wordsize>64</wordsize>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <domain type='qemu'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <domain type='kvm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    </arch>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    <features>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <acpi default='on' toggle='yes'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <apic default='on' toggle='no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <cpuselection/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <deviceboot/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <disksnapshot default='on' toggle='no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <externalSnapshot/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    </features>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:  </guest>
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 
Jan 22 08:55:26 np0005592158 nova_compute[221400]: </capabilities>
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.913 221408 DEBUG nova.virt.libvirt.host [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 22 08:55:26 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.917 221408 DEBUG nova.virt.libvirt.host [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 22 08:55:26 np0005592158 nova_compute[221400]: <domainCapabilities>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:  <path>/usr/libexec/qemu-kvm</path>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:  <domain>kvm</domain>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:  <arch>i686</arch>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:  <vcpu max='240'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:  <iothreads supported='yes'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:  <os supported='yes'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    <enum name='firmware'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    <loader supported='yes'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <enum name='type'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>rom</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>pflash</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <enum name='readonly'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>yes</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>no</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <enum name='secure'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>no</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    </loader>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:  </os>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:  <cpu>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    <mode name='host-passthrough' supported='yes'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <enum name='hostPassthroughMigratable'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>on</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>off</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    </mode>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    <mode name='maximum' supported='yes'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <enum name='maximumMigratable'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>on</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>off</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    </mode>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    <mode name='host-model' supported='yes'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <vendor>AMD</vendor>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature policy='require' name='x2apic'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature policy='require' name='tsc-deadline'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature policy='require' name='hypervisor'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature policy='require' name='tsc_adjust'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature policy='require' name='spec-ctrl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature policy='require' name='stibp'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature policy='require' name='ssbd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature policy='require' name='cmp_legacy'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature policy='require' name='overflow-recov'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature policy='require' name='succor'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature policy='require' name='ibrs'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature policy='require' name='amd-ssbd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature policy='require' name='virt-ssbd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature policy='require' name='lbrv'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature policy='require' name='tsc-scale'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature policy='require' name='vmcb-clean'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature policy='require' name='flushbyasid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature policy='require' name='pause-filter'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature policy='require' name='pfthreshold'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature policy='require' name='svme-addr-chk'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <feature policy='disable' name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    </mode>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    <mode name='custom' supported='yes'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Broadwell'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Broadwell-IBRS'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Broadwell-noTSX'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Broadwell-v1'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Broadwell-v2'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Broadwell-v3'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Broadwell-v4'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Cascadelake-Server'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Cascadelake-Server-v1'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Cascadelake-Server-v2'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Cascadelake-Server-v3'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Cascadelake-Server-v4'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Cascadelake-Server-v5'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='ClearwaterForest'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx-ifma'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='bhi-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='cmpccxadd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ddpd-u'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='intel-psfd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='lam'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='mcdt-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pbrsb-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='prefetchiti'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='sha512'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='sm3'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='sm4'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='ClearwaterForest-v1'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx-ifma'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='bhi-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='cmpccxadd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ddpd-u'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='intel-psfd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='lam'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='mcdt-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pbrsb-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='prefetchiti'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='sha512'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='sm3'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='sm4'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Cooperlake'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Cooperlake-v1'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Cooperlake-v2'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Denverton'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='mpx'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Denverton-v1'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='mpx'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Denverton-v2'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Denverton-v3'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Dhyana-v2'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Genoa'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='amd-psfd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='auto-ibrs'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='stibp-always-on'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Genoa-v1'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='amd-psfd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='auto-ibrs'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='stibp-always-on'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Genoa-v2'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='amd-psfd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='auto-ibrs'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='perfmon-v2'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='stibp-always-on'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Milan'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Milan-v1'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Milan-v2'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='amd-psfd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='stibp-always-on'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Milan-v3'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='amd-psfd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='stibp-always-on'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Rome'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Rome-v1'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Rome-v2'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Rome-v3'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Turin'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='amd-psfd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='auto-ibrs'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='perfmon-v2'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='prefetchi'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='sbpb'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='stibp-always-on'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Turin-v1'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='amd-psfd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='auto-ibrs'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='perfmon-v2'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='prefetchi'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='sbpb'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='stibp-always-on'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='EPYC-v3'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='EPYC-v4'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='EPYC-v5'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='GraniteRapids'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='amx-bf16'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='amx-fp16'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='amx-int8'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='amx-tile'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-fp16'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrc'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fzrm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='mcdt-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pbrsb-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='prefetchiti'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xfd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='GraniteRapids-v1'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='amx-bf16'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='amx-fp16'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='amx-int8'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='amx-tile'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-fp16'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrc'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fzrm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='mcdt-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pbrsb-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='prefetchiti'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xfd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='GraniteRapids-v2'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='amx-bf16'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='amx-fp16'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='amx-int8'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='amx-tile'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx10'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx10-128'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx10-256'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx10-512'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-fp16'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrc'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fzrm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='mcdt-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pbrsb-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='prefetchiti'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xfd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='GraniteRapids-v3'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='amx-bf16'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='amx-fp16'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='amx-int8'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='amx-tile'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx10'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx10-128'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx10-256'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx10-512'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-fp16'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrc'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fzrm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='mcdt-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pbrsb-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='prefetchiti'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xfd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Haswell'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Haswell-IBRS'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Haswell-noTSX'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Haswell-v1'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Haswell-v2'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Haswell-v3'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Haswell-v4'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Icelake-Server'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Icelake-Server-noTSX'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Icelake-Server-v1'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Icelake-Server-v2'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Icelake-Server-v3'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Icelake-Server-v4'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Icelake-Server-v5'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Icelake-Server-v6'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Icelake-Server-v7'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='IvyBridge'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='IvyBridge-IBRS'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='IvyBridge-v1'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='IvyBridge-v2'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='KnightsMill'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512er'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512pf'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='KnightsMill-v1'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512er'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512pf'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Opteron_G4'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fma4'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xop'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Opteron_G4-v1'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fma4'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xop'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Opteron_G5'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fma4'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='tbm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xop'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Opteron_G5-v1'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fma4'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='tbm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xop'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='SapphireRapids'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='amx-bf16'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='amx-int8'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='amx-tile'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-fp16'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrc'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fzrm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xfd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='SapphireRapids-v1'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='amx-bf16'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='amx-int8'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='amx-tile'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-fp16'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrc'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fzrm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xfd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='SapphireRapids-v2'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='amx-bf16'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='amx-int8'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='amx-tile'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-fp16'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrc'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fzrm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xfd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='SapphireRapids-v3'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='amx-bf16'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='amx-int8'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='amx-tile'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-fp16'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrc'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fzrm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xfd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='SapphireRapids-v4'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='amx-bf16'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='amx-int8'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='amx-tile'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-fp16'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrc'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fzrm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xfd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='SierraForest'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx-ifma'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='cmpccxadd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='mcdt-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pbrsb-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='SierraForest-v1'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx-ifma'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='cmpccxadd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='mcdt-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pbrsb-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='SierraForest-v2'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx-ifma'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='cmpccxadd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='intel-psfd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='lam'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='mcdt-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pbrsb-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='SierraForest-v3'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx-ifma'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='cmpccxadd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='intel-psfd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='lam'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='mcdt-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pbrsb-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Client'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Client-IBRS'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Client-v1'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Client-v2'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Client-v3'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Client-v4'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Server'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Server-IBRS'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Server-v1'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Server-v2'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Server-v3'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Server-v4'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Server-v5'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Snowridge'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='core-capability'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='mpx'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='split-lock-detect'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Snowridge-v1'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='core-capability'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='mpx'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='split-lock-detect'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Snowridge-v2'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='core-capability'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='split-lock-detect'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Snowridge-v3'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='core-capability'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='split-lock-detect'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='Snowridge-v4'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='athlon'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='3dnow'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='3dnowext'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='athlon-v1'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='3dnow'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='3dnowext'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='core2duo'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='core2duo-v1'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='coreduo'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='coreduo-v1'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='n270'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='n270-v1'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='phenom'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='3dnow'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='3dnowext'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <blockers model='phenom-v1'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='3dnow'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <feature name='3dnowext'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    </mode>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:  </cpu>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:  <memoryBacking supported='yes'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    <enum name='sourceType'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <value>file</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <value>anonymous</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <value>memfd</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    </enum>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:  </memoryBacking>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:  <devices>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    <disk supported='yes'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <enum name='diskDevice'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>disk</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>cdrom</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>floppy</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>lun</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <enum name='bus'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>ide</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>fdc</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>scsi</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>virtio</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>usb</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>sata</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <enum name='model'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>virtio</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>virtio-transitional</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>virtio-non-transitional</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    </disk>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    <graphics supported='yes'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <enum name='type'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>vnc</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>egl-headless</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>dbus</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    </graphics>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    <video supported='yes'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <enum name='modelType'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>vga</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>cirrus</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>virtio</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>none</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>bochs</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>ramfb</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    </video>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    <hostdev supported='yes'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <enum name='mode'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>subsystem</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <enum name='startupPolicy'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>default</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>mandatory</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>requisite</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>optional</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <enum name='subsysType'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>usb</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>pci</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>scsi</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <enum name='capsType'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <enum name='pciBackend'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    </hostdev>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    <rng supported='yes'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <enum name='model'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>virtio</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>virtio-transitional</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>virtio-non-transitional</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <enum name='backendModel'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>random</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>egd</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>builtin</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    </rng>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    <filesystem supported='yes'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <enum name='driverType'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>path</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>handle</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>virtiofs</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    </filesystem>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    <tpm supported='yes'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <enum name='model'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>tpm-tis</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>tpm-crb</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <enum name='backendModel'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>emulator</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>external</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <enum name='backendVersion'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>2.0</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    </tpm>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    <redirdev supported='yes'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <enum name='bus'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>usb</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    </redirdev>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    <channel supported='yes'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <enum name='type'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>pty</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>unix</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    </channel>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:    <crypto supported='yes'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <enum name='model'/>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      <enum name='type'>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:        <value>qemu</value>
Jan 22 08:55:26 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='backendModel'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>builtin</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </crypto>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <interface supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='backendType'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>default</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>passt</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </interface>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <panic supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='model'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>isa</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>hyperv</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </panic>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <console supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='type'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>null</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>vc</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>pty</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>dev</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>file</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>pipe</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>stdio</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>udp</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>tcp</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>unix</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>qemu-vdagent</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>dbus</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </console>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  </devices>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  <features>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <gic supported='no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <vmcoreinfo supported='yes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <genid supported='yes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <backingStoreInput supported='yes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <backup supported='yes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <async-teardown supported='yes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <s390-pv supported='no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <ps2 supported='yes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <tdx supported='no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <sev supported='no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <sgx supported='no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <hyperv supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='features'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>relaxed</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>vapic</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>spinlocks</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>vpindex</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>runtime</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>synic</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>stimer</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>reset</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>vendor_id</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>frequencies</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>reenlightenment</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>tlbflush</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>ipi</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>avic</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>emsr_bitmap</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>xmm_input</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <defaults>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <spinlocks>4095</spinlocks>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <stimer_direct>on</stimer_direct>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <tlbflush_direct>on</tlbflush_direct>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <tlbflush_extended>on</tlbflush_extended>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </defaults>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </hyperv>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <launchSecurity supported='no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  </features>
Jan 22 08:55:27 np0005592158 nova_compute[221400]: </domainCapabilities>
Jan 22 08:55:27 np0005592158 nova_compute[221400]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 22 08:55:27 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.924 221408 DEBUG nova.virt.libvirt.host [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 22 08:55:27 np0005592158 nova_compute[221400]: <domainCapabilities>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  <path>/usr/libexec/qemu-kvm</path>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  <domain>kvm</domain>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  <arch>i686</arch>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  <vcpu max='4096'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  <iothreads supported='yes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  <os supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <enum name='firmware'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <loader supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='type'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>rom</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>pflash</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='readonly'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>yes</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>no</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='secure'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>no</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </loader>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  </os>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  <cpu>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <mode name='host-passthrough' supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='hostPassthroughMigratable'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>on</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>off</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </mode>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <mode name='maximum' supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='maximumMigratable'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>on</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>off</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </mode>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <mode name='host-model' supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <vendor>AMD</vendor>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='x2apic'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='tsc-deadline'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='hypervisor'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='tsc_adjust'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='spec-ctrl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='stibp'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='ssbd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='cmp_legacy'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='overflow-recov'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='succor'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='ibrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='amd-ssbd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='virt-ssbd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='lbrv'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='tsc-scale'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='vmcb-clean'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='flushbyasid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='pause-filter'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='pfthreshold'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='svme-addr-chk'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='disable' name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </mode>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <mode name='custom' supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Broadwell'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Broadwell-IBRS'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Broadwell-noTSX'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Broadwell-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Broadwell-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Broadwell-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Broadwell-v4'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Cascadelake-Server'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Cascadelake-Server-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Cascadelake-Server-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Cascadelake-Server-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Cascadelake-Server-v4'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Cascadelake-Server-v5'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='ClearwaterForest'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bhi-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cmpccxadd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ddpd-u'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='intel-psfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='lam'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mcdt-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pbrsb-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='prefetchiti'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sha512'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sm3'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sm4'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='ClearwaterForest-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bhi-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cmpccxadd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ddpd-u'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='intel-psfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='lam'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mcdt-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pbrsb-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='prefetchiti'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sha512'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sm3'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sm4'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Cooperlake'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Cooperlake-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Cooperlake-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Denverton'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mpx'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Denverton-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mpx'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Denverton-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Denverton-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Dhyana-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Genoa'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amd-psfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='auto-ibrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='stibp-always-on'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Genoa-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amd-psfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='auto-ibrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='stibp-always-on'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Genoa-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amd-psfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='auto-ibrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='perfmon-v2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='stibp-always-on'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Milan'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Milan-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Milan-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amd-psfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='stibp-always-on'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Milan-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amd-psfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='stibp-always-on'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Rome'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Rome-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Rome-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Rome-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Turin'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amd-psfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='auto-ibrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='perfmon-v2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='prefetchi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbpb'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='stibp-always-on'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Turin-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amd-psfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='auto-ibrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='perfmon-v2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='prefetchi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbpb'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='stibp-always-on'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-v4'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-v5'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='GraniteRapids'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-fp16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-tile'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-fp16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrc'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fzrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mcdt-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pbrsb-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='prefetchiti'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='GraniteRapids-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-fp16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-tile'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-fp16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrc'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fzrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mcdt-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pbrsb-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='prefetchiti'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='GraniteRapids-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-fp16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-tile'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx10'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx10-128'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx10-256'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx10-512'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-fp16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrc'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fzrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mcdt-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pbrsb-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='prefetchiti'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='GraniteRapids-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-fp16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-tile'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx10'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx10-128'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx10-256'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx10-512'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-fp16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrc'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fzrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mcdt-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pbrsb-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='prefetchiti'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Haswell'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Haswell-IBRS'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Haswell-noTSX'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Haswell-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Haswell-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Haswell-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Haswell-v4'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Icelake-Server'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Icelake-Server-noTSX'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Icelake-Server-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Icelake-Server-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Icelake-Server-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Icelake-Server-v4'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Icelake-Server-v5'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Icelake-Server-v6'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Icelake-Server-v7'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='IvyBridge'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='IvyBridge-IBRS'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='IvyBridge-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='IvyBridge-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='KnightsMill'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512er'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512pf'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='KnightsMill-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512er'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512pf'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Opteron_G4'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fma4'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xop'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Opteron_G4-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fma4'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xop'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Opteron_G5'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fma4'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='tbm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xop'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Opteron_G5-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fma4'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='tbm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xop'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='SapphireRapids'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-tile'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-fp16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrc'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fzrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='SapphireRapids-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-tile'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-fp16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrc'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fzrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='SapphireRapids-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-tile'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-fp16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrc'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fzrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='SapphireRapids-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-tile'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-fp16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrc'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fzrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='SapphireRapids-v4'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-tile'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-fp16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrc'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fzrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='SierraForest'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cmpccxadd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mcdt-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pbrsb-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='SierraForest-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cmpccxadd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mcdt-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pbrsb-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='SierraForest-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cmpccxadd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='intel-psfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='lam'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mcdt-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pbrsb-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='SierraForest-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cmpccxadd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='intel-psfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='lam'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mcdt-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pbrsb-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Client'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Client-IBRS'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Client-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Client-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Client-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Client-v4'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Server'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Server-IBRS'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Server-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Server-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Server-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Server-v4'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Server-v5'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Snowridge'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='core-capability'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mpx'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='split-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Snowridge-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='core-capability'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mpx'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='split-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Snowridge-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='core-capability'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='split-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Snowridge-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='core-capability'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='split-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Snowridge-v4'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='athlon'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='3dnow'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='3dnowext'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='athlon-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='3dnow'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='3dnowext'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='core2duo'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='core2duo-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='coreduo'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='coreduo-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='n270'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='n270-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='phenom'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='3dnow'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='3dnowext'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='phenom-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='3dnow'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='3dnowext'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </mode>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  </cpu>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  <memoryBacking supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <enum name='sourceType'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <value>file</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <value>anonymous</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <value>memfd</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  </memoryBacking>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  <devices>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <disk supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='diskDevice'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>disk</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>cdrom</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>floppy</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>lun</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='bus'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>fdc</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>scsi</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>virtio</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>usb</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>sata</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='model'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>virtio</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>virtio-transitional</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>virtio-non-transitional</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </disk>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <graphics supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='type'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>vnc</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>egl-headless</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>dbus</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </graphics>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <video supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='modelType'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>vga</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>cirrus</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>virtio</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>none</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>bochs</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>ramfb</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </video>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <hostdev supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='mode'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>subsystem</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='startupPolicy'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>default</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>mandatory</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>requisite</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>optional</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='subsysType'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>usb</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>pci</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>scsi</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='capsType'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='pciBackend'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </hostdev>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <rng supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='model'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>virtio</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>virtio-transitional</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>virtio-non-transitional</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='backendModel'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>random</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>egd</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>builtin</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </rng>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <filesystem supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='driverType'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>path</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>handle</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>virtiofs</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </filesystem>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <tpm supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='model'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>tpm-tis</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>tpm-crb</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='backendModel'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>emulator</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>external</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='backendVersion'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>2.0</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </tpm>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <redirdev supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='bus'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>usb</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </redirdev>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <channel supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='type'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>pty</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>unix</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </channel>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <crypto supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='model'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='type'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>qemu</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='backendModel'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>builtin</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </crypto>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <interface supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='backendType'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>default</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>passt</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </interface>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <panic supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='model'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>isa</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>hyperv</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </panic>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <console supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='type'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>null</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>vc</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>pty</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>dev</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>file</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>pipe</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>stdio</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>udp</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>tcp</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>unix</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>qemu-vdagent</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>dbus</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </console>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  </devices>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  <features>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <gic supported='no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <vmcoreinfo supported='yes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <genid supported='yes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <backingStoreInput supported='yes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <backup supported='yes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <async-teardown supported='yes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <s390-pv supported='no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <ps2 supported='yes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <tdx supported='no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <sev supported='no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <sgx supported='no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <hyperv supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='features'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>relaxed</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>vapic</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>spinlocks</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>vpindex</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>runtime</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>synic</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>stimer</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>reset</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>vendor_id</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>frequencies</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>reenlightenment</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>tlbflush</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>ipi</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>avic</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>emsr_bitmap</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>xmm_input</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <defaults>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <spinlocks>4095</spinlocks>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <stimer_direct>on</stimer_direct>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <tlbflush_direct>on</tlbflush_direct>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <tlbflush_extended>on</tlbflush_extended>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </defaults>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </hyperv>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <launchSecurity supported='no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  </features>
Jan 22 08:55:27 np0005592158 nova_compute[221400]: </domainCapabilities>
Jan 22 08:55:27 np0005592158 nova_compute[221400]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Jan 22 08:55:27 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.985 221408 DEBUG nova.virt.libvirt.host [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Jan 22 08:55:27 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.988 221408 DEBUG nova.virt.libvirt.volume.mount [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Jan 22 08:55:27 np0005592158 nova_compute[221400]: 2026-01-22 13:55:26.992 221408 DEBUG nova.virt.libvirt.host [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 22 08:55:27 np0005592158 nova_compute[221400]: <domainCapabilities>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  <path>/usr/libexec/qemu-kvm</path>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  <domain>kvm</domain>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  <arch>x86_64</arch>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  <vcpu max='240'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  <iothreads supported='yes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  <os supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <enum name='firmware'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <loader supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='type'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>rom</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>pflash</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='readonly'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>yes</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>no</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='secure'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>no</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </loader>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  </os>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  <cpu>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <mode name='host-passthrough' supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='hostPassthroughMigratable'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>on</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>off</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </mode>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <mode name='maximum' supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='maximumMigratable'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>on</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>off</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </mode>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <mode name='host-model' supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <vendor>AMD</vendor>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='x2apic'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='tsc-deadline'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='hypervisor'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='tsc_adjust'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='spec-ctrl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='stibp'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='ssbd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='cmp_legacy'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='overflow-recov'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='succor'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='ibrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='amd-ssbd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='virt-ssbd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='lbrv'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='tsc-scale'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='vmcb-clean'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='flushbyasid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='pause-filter'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='pfthreshold'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='svme-addr-chk'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='disable' name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </mode>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <mode name='custom' supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Broadwell'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Broadwell-IBRS'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Broadwell-noTSX'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Broadwell-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Broadwell-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Broadwell-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Broadwell-v4'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Cascadelake-Server'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Cascadelake-Server-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Cascadelake-Server-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Cascadelake-Server-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Cascadelake-Server-v4'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Cascadelake-Server-v5'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='ClearwaterForest'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bhi-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cmpccxadd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ddpd-u'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='intel-psfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='lam'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mcdt-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pbrsb-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='prefetchiti'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sha512'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sm3'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sm4'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='ClearwaterForest-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bhi-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cmpccxadd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ddpd-u'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='intel-psfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='lam'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mcdt-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pbrsb-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='prefetchiti'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sha512'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sm3'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sm4'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Cooperlake'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Cooperlake-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Cooperlake-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Denverton'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mpx'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Denverton-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mpx'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Denverton-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Denverton-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Dhyana-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Genoa'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amd-psfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='auto-ibrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='stibp-always-on'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Genoa-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amd-psfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='auto-ibrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='stibp-always-on'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Genoa-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amd-psfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='auto-ibrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='perfmon-v2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='stibp-always-on'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Milan'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Milan-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Milan-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amd-psfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='stibp-always-on'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Milan-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amd-psfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='stibp-always-on'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Rome'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Rome-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Rome-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Rome-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Turin'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amd-psfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='auto-ibrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='perfmon-v2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='prefetchi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbpb'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='stibp-always-on'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Turin-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amd-psfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='auto-ibrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='perfmon-v2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='prefetchi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbpb'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='stibp-always-on'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-v4'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-v5'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='GraniteRapids'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-fp16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-tile'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-fp16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrc'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fzrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mcdt-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pbrsb-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='prefetchiti'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='GraniteRapids-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-fp16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-tile'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-fp16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrc'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fzrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mcdt-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pbrsb-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='prefetchiti'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='GraniteRapids-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-fp16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-tile'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx10'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx10-128'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx10-256'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx10-512'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-fp16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrc'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fzrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mcdt-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pbrsb-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='prefetchiti'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='GraniteRapids-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-fp16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-tile'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx10'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx10-128'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx10-256'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx10-512'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-fp16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrc'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fzrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mcdt-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pbrsb-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='prefetchiti'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Haswell'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Haswell-IBRS'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Haswell-noTSX'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Haswell-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Haswell-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Haswell-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Haswell-v4'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Icelake-Server'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Icelake-Server-noTSX'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Icelake-Server-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Icelake-Server-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Icelake-Server-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Icelake-Server-v4'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Icelake-Server-v5'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Icelake-Server-v6'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Icelake-Server-v7'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='IvyBridge'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='IvyBridge-IBRS'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='IvyBridge-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='IvyBridge-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='KnightsMill'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512er'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512pf'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='KnightsMill-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512er'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512pf'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Opteron_G4'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fma4'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xop'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Opteron_G4-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fma4'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xop'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Opteron_G5'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fma4'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='tbm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xop'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Opteron_G5-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fma4'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='tbm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xop'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='SapphireRapids'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-tile'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-fp16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrc'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fzrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='SapphireRapids-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-tile'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-fp16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrc'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fzrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='SapphireRapids-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-tile'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-fp16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrc'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fzrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='SapphireRapids-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-tile'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-fp16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrc'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fzrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='SapphireRapids-v4'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-tile'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-fp16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrc'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fzrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='SierraForest'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cmpccxadd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mcdt-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pbrsb-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='SierraForest-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cmpccxadd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mcdt-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pbrsb-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='SierraForest-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cmpccxadd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='intel-psfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='lam'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mcdt-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pbrsb-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='SierraForest-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cmpccxadd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='intel-psfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='lam'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mcdt-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pbrsb-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Client'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Client-IBRS'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Client-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Client-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Client-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Client-v4'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Server'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Server-IBRS'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Server-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Server-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Server-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Server-v4'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Server-v5'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Snowridge'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='core-capability'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mpx'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='split-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Snowridge-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='core-capability'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mpx'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='split-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Snowridge-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='core-capability'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='split-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Snowridge-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='core-capability'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='split-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Snowridge-v4'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='athlon'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='3dnow'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='3dnowext'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='athlon-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='3dnow'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='3dnowext'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='core2duo'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='core2duo-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='coreduo'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='coreduo-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='n270'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='n270-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='phenom'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='3dnow'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='3dnowext'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='phenom-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='3dnow'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='3dnowext'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </mode>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  </cpu>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  <memoryBacking supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <enum name='sourceType'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <value>file</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <value>anonymous</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <value>memfd</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  </memoryBacking>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  <devices>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <disk supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='diskDevice'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>disk</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>cdrom</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>floppy</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>lun</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='bus'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>ide</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>fdc</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>scsi</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>virtio</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>usb</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>sata</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='model'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>virtio</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>virtio-transitional</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>virtio-non-transitional</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </disk>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <graphics supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='type'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>vnc</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>egl-headless</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>dbus</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </graphics>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <video supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='modelType'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>vga</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>cirrus</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>virtio</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>none</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>bochs</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>ramfb</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </video>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <hostdev supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='mode'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>subsystem</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='startupPolicy'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>default</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>mandatory</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>requisite</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>optional</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='subsysType'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>usb</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>pci</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>scsi</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='capsType'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='pciBackend'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </hostdev>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <rng supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='model'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>virtio</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>virtio-transitional</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>virtio-non-transitional</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='backendModel'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>random</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>egd</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>builtin</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </rng>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <filesystem supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='driverType'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>path</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>handle</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>virtiofs</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </filesystem>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <tpm supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='model'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>tpm-tis</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>tpm-crb</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='backendModel'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>emulator</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>external</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='backendVersion'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>2.0</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </tpm>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <redirdev supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='bus'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>usb</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </redirdev>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <channel supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='type'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>pty</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>unix</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </channel>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <crypto supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='model'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='type'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>qemu</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='backendModel'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>builtin</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </crypto>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <interface supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='backendType'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>default</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>passt</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </interface>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <panic supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='model'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>isa</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>hyperv</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </panic>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <console supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='type'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>null</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>vc</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>pty</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>dev</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>file</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>pipe</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>stdio</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>udp</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>tcp</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>unix</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>qemu-vdagent</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>dbus</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </console>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  </devices>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  <features>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <gic supported='no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <vmcoreinfo supported='yes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <genid supported='yes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <backingStoreInput supported='yes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <backup supported='yes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <async-teardown supported='yes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <s390-pv supported='no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <ps2 supported='yes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <tdx supported='no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <sev supported='no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <sgx supported='no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <hyperv supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='features'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>relaxed</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>vapic</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>spinlocks</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>vpindex</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>runtime</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>synic</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>stimer</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>reset</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>vendor_id</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>frequencies</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>reenlightenment</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>tlbflush</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>ipi</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>avic</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>emsr_bitmap</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>xmm_input</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <defaults>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <spinlocks>4095</spinlocks>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <stimer_direct>on</stimer_direct>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <tlbflush_direct>on</tlbflush_direct>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <tlbflush_extended>on</tlbflush_extended>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </defaults>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </hyperv>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <launchSecurity supported='no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  </features>
Jan 22 08:55:27 np0005592158 nova_compute[221400]: </domainCapabilities>
Jan 22 08:55:27 np0005592158 nova_compute[221400]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 22 08:55:27 np0005592158 nova_compute[221400]: 2026-01-22 13:55:27.074 221408 DEBUG nova.virt.libvirt.host [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 22 08:55:27 np0005592158 nova_compute[221400]: <domainCapabilities>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  <path>/usr/libexec/qemu-kvm</path>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  <domain>kvm</domain>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  <arch>x86_64</arch>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  <vcpu max='4096'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  <iothreads supported='yes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  <os supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <enum name='firmware'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <value>efi</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <loader supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='type'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>rom</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>pflash</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='readonly'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>yes</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>no</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='secure'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>yes</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>no</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </loader>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  </os>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  <cpu>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <mode name='host-passthrough' supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='hostPassthroughMigratable'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>on</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>off</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </mode>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <mode name='maximum' supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='maximumMigratable'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>on</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>off</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </mode>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <mode name='host-model' supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <vendor>AMD</vendor>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='x2apic'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='tsc-deadline'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='hypervisor'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='tsc_adjust'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='spec-ctrl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='stibp'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='ssbd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='cmp_legacy'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='overflow-recov'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='succor'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='ibrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='amd-ssbd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='virt-ssbd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='lbrv'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='tsc-scale'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='vmcb-clean'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='flushbyasid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='pause-filter'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='pfthreshold'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='svme-addr-chk'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <feature policy='disable' name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </mode>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <mode name='custom' supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Broadwell'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Broadwell-IBRS'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Broadwell-noTSX'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Broadwell-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Broadwell-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Broadwell-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Broadwell-v4'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Cascadelake-Server'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Cascadelake-Server-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Cascadelake-Server-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Cascadelake-Server-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Cascadelake-Server-v4'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Cascadelake-Server-v5'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='ClearwaterForest'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bhi-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cmpccxadd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ddpd-u'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='intel-psfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='lam'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mcdt-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pbrsb-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='prefetchiti'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sha512'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sm3'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sm4'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='ClearwaterForest-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bhi-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cmpccxadd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ddpd-u'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='intel-psfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='lam'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mcdt-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pbrsb-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='prefetchiti'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sha512'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sm3'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sm4'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Cooperlake'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Cooperlake-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Cooperlake-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Denverton'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mpx'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Denverton-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mpx'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Denverton-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Denverton-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Dhyana-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Genoa'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amd-psfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='auto-ibrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='stibp-always-on'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Genoa-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amd-psfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='auto-ibrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='stibp-always-on'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Genoa-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amd-psfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='auto-ibrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='perfmon-v2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='stibp-always-on'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Milan'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Milan-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Milan-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amd-psfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='stibp-always-on'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Milan-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amd-psfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='stibp-always-on'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Rome'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Rome-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Rome-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Rome-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Turin'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amd-psfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='auto-ibrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='perfmon-v2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='prefetchi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbpb'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='stibp-always-on'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-Turin-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amd-psfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='auto-ibrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='perfmon-v2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='prefetchi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbpb'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='stibp-always-on'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-v4'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='EPYC-v5'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='GraniteRapids'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-fp16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-tile'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-fp16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrc'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fzrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mcdt-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pbrsb-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='prefetchiti'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='GraniteRapids-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-fp16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-tile'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-fp16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrc'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fzrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mcdt-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pbrsb-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='prefetchiti'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='GraniteRapids-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-fp16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-tile'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx10'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx10-128'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx10-256'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx10-512'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-fp16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrc'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fzrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mcdt-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pbrsb-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='prefetchiti'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='GraniteRapids-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-fp16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-tile'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx10'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx10-128'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx10-256'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx10-512'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-fp16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrc'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fzrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mcdt-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pbrsb-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='prefetchiti'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Haswell'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Haswell-IBRS'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Haswell-noTSX'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Haswell-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Haswell-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Haswell-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Haswell-v4'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Icelake-Server'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Icelake-Server-noTSX'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Icelake-Server-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Icelake-Server-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Icelake-Server-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Icelake-Server-v4'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Icelake-Server-v5'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Icelake-Server-v6'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Icelake-Server-v7'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='IvyBridge'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='IvyBridge-IBRS'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='IvyBridge-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='IvyBridge-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='KnightsMill'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512er'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512pf'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='KnightsMill-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512er'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512pf'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Opteron_G4'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fma4'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xop'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Opteron_G4-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fma4'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xop'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Opteron_G5'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fma4'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='tbm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xop'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Opteron_G5-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fma4'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='tbm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xop'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='SapphireRapids'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-tile'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-fp16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrc'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fzrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='SapphireRapids-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-tile'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-fp16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrc'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fzrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='SapphireRapids-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-tile'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-fp16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrc'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fzrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='SapphireRapids-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-tile'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-fp16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrc'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fzrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='SapphireRapids-v4'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='amx-tile'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-bf16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-fp16'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bitalg'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrc'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fzrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='la57'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='taa-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='SierraForest'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cmpccxadd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mcdt-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pbrsb-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='SierraForest-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cmpccxadd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mcdt-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pbrsb-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='SierraForest-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cmpccxadd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='intel-psfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='lam'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mcdt-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pbrsb-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='SierraForest-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-ifma'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cmpccxadd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fbsdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='fsrs'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ibrs-all'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='intel-psfd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='lam'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mcdt-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pbrsb-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='psdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='serialize'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vaes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Client'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Client-IBRS'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Client-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Client-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Client-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Client-v4'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Server'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Server-IBRS'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Server-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Server-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='hle'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='rtm'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Server-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Server-v4'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Skylake-Server-v5'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512bw'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512cd'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512dq'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512f'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='avx512vl'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='invpcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pcid'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='pku'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Snowridge'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='core-capability'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mpx'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='split-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Snowridge-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='core-capability'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='mpx'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='split-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Snowridge-v2'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='core-capability'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='split-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Snowridge-v3'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='core-capability'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='split-lock-detect'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='Snowridge-v4'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='cldemote'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='erms'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='gfni'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdir64b'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='movdiri'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='xsaves'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='athlon'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='3dnow'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='3dnowext'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='athlon-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='3dnow'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='3dnowext'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='core2duo'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='core2duo-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='coreduo'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='coreduo-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='n270'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='n270-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='ss'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='phenom'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='3dnow'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='3dnowext'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <blockers model='phenom-v1'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='3dnow'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <feature name='3dnowext'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </blockers>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </mode>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  </cpu>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  <memoryBacking supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <enum name='sourceType'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <value>file</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <value>anonymous</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <value>memfd</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  </memoryBacking>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  <devices>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <disk supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='diskDevice'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>disk</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>cdrom</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>floppy</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>lun</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='bus'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>fdc</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>scsi</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>virtio</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>usb</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>sata</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='model'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>virtio</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>virtio-transitional</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>virtio-non-transitional</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </disk>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <graphics supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='type'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>vnc</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>egl-headless</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>dbus</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </graphics>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <video supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='modelType'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>vga</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>cirrus</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>virtio</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>none</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>bochs</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>ramfb</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </video>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <hostdev supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='mode'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>subsystem</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='startupPolicy'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>default</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>mandatory</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>requisite</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>optional</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='subsysType'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>usb</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>pci</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>scsi</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='capsType'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='pciBackend'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </hostdev>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <rng supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='model'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>virtio</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>virtio-transitional</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>virtio-non-transitional</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='backendModel'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>random</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>egd</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>builtin</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </rng>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <filesystem supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='driverType'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>path</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>handle</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>virtiofs</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </filesystem>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <tpm supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='model'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>tpm-tis</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>tpm-crb</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='backendModel'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>emulator</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>external</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='backendVersion'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>2.0</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </tpm>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <redirdev supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='bus'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>usb</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </redirdev>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <channel supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='type'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>pty</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>unix</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </channel>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <crypto supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='model'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='type'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>qemu</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='backendModel'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>builtin</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </crypto>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <interface supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='backendType'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>default</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>passt</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </interface>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <panic supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='model'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>isa</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>hyperv</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </panic>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <console supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='type'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>null</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>vc</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>pty</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>dev</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>file</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>pipe</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>stdio</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>udp</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>tcp</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>unix</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>qemu-vdagent</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>dbus</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </console>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  </devices>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  <features>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <gic supported='no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <vmcoreinfo supported='yes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <genid supported='yes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <backingStoreInput supported='yes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <backup supported='yes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <async-teardown supported='yes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <s390-pv supported='no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <ps2 supported='yes'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <tdx supported='no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <sev supported='no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <sgx supported='no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <hyperv supported='yes'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <enum name='features'>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>relaxed</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>vapic</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>spinlocks</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>vpindex</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>runtime</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>synic</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>stimer</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>reset</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>vendor_id</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>frequencies</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>reenlightenment</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>tlbflush</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>ipi</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>avic</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>emsr_bitmap</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <value>xmm_input</value>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </enum>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      <defaults>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <spinlocks>4095</spinlocks>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <stimer_direct>on</stimer_direct>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <tlbflush_direct>on</tlbflush_direct>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <tlbflush_extended>on</tlbflush_extended>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:      </defaults>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    </hyperv>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:    <launchSecurity supported='no'/>
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  </features>
Jan 22 08:55:27 np0005592158 nova_compute[221400]: </domainCapabilities>
Jan 22 08:55:27 np0005592158 nova_compute[221400]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Jan 22 08:55:27 np0005592158 nova_compute[221400]: 2026-01-22 13:55:27.170 221408 DEBUG nova.virt.libvirt.host [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Jan 22 08:55:27 np0005592158 nova_compute[221400]: 2026-01-22 13:55:27.177 221408 INFO nova.virt.libvirt.host [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Secure Boot support detected#033[00m
Jan 22 08:55:27 np0005592158 nova_compute[221400]: 2026-01-22 13:55:27.180 221408 INFO nova.virt.libvirt.driver [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Jan 22 08:55:27 np0005592158 nova_compute[221400]: 2026-01-22 13:55:27.191 221408 DEBUG nova.virt.libvirt.driver [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] cpu compare xml: <cpu match="exact">
Jan 22 08:55:27 np0005592158 nova_compute[221400]:  <model>Nehalem</model>
Jan 22 08:55:27 np0005592158 nova_compute[221400]: </cpu>
Jan 22 08:55:27 np0005592158 nova_compute[221400]: _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019#033[00m
Jan 22 08:55:27 np0005592158 nova_compute[221400]: 2026-01-22 13:55:27.193 221408 DEBUG nova.virt.libvirt.driver [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Jan 22 08:55:27 np0005592158 nova_compute[221400]: 2026-01-22 13:55:27.233 221408 INFO nova.virt.node [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Determined node identity 9903a6f8-fb0a-4d8e-b632-398eaedd969e from /var/lib/nova/compute_id#033[00m
Jan 22 08:55:27 np0005592158 nova_compute[221400]: 2026-01-22 13:55:27.251 221408 WARNING nova.compute.manager [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Compute nodes ['9903a6f8-fb0a-4d8e-b632-398eaedd969e'] for host compute-1.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Jan 22 08:55:27 np0005592158 nova_compute[221400]: 2026-01-22 13:55:27.300 221408 INFO nova.compute.manager [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Jan 22 08:55:27 np0005592158 nova_compute[221400]: 2026-01-22 13:55:27.342 221408 WARNING nova.compute.manager [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] No compute node record found for host compute-1.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.#033[00m
Jan 22 08:55:27 np0005592158 nova_compute[221400]: 2026-01-22 13:55:27.342 221408 DEBUG oslo_concurrency.lockutils [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 08:55:27 np0005592158 nova_compute[221400]: 2026-01-22 13:55:27.342 221408 DEBUG oslo_concurrency.lockutils [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 08:55:27 np0005592158 nova_compute[221400]: 2026-01-22 13:55:27.342 221408 DEBUG oslo_concurrency.lockutils [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 08:55:27 np0005592158 nova_compute[221400]: 2026-01-22 13:55:27.343 221408 DEBUG nova.compute.resource_tracker [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 08:55:27 np0005592158 nova_compute[221400]: 2026-01-22 13:55:27.343 221408 DEBUG oslo_concurrency.processutils [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 08:55:27 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 08:55:27 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4194248427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 08:55:27 np0005592158 nova_compute[221400]: 2026-01-22 13:55:27.820 221408 DEBUG oslo_concurrency.processutils [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 08:55:27 np0005592158 nova_compute[221400]: 2026-01-22 13:55:27.978 221408 WARNING nova.virt.libvirt.driver [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 08:55:27 np0005592158 nova_compute[221400]: 2026-01-22 13:55:27.979 221408 DEBUG nova.compute.resource_tracker [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5284MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 08:55:27 np0005592158 nova_compute[221400]: 2026-01-22 13:55:27.979 221408 DEBUG oslo_concurrency.lockutils [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 08:55:27 np0005592158 nova_compute[221400]: 2026-01-22 13:55:27.979 221408 DEBUG oslo_concurrency.lockutils [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 08:55:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:55:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:28.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:55:28 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:28 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:55:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:55:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:28.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:55:28 np0005592158 nova_compute[221400]: 2026-01-22 13:55:28.926 221408 WARNING nova.compute.resource_tracker [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] No compute node record for compute-1.ctlplane.example.com:9903a6f8-fb0a-4d8e-b632-398eaedd969e: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 9903a6f8-fb0a-4d8e-b632-398eaedd969e could not be found.#033[00m
Jan 22 08:55:28 np0005592158 nova_compute[221400]: 2026-01-22 13:55:28.951 221408 INFO nova.compute.resource_tracker [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Compute node record created for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com with uuid: 9903a6f8-fb0a-4d8e-b632-398eaedd969e#033[00m
Jan 22 08:55:29 np0005592158 nova_compute[221400]: 2026-01-22 13:55:29.005 221408 DEBUG nova.compute.resource_tracker [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 08:55:29 np0005592158 nova_compute[221400]: 2026-01-22 13:55:29.006 221408 DEBUG nova.compute.resource_tracker [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 08:55:29 np0005592158 nova_compute[221400]: 2026-01-22 13:55:29.367 221408 INFO nova.scheduler.client.report [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] [req-8da87f40-a021-43f4-bf70-abf636307ede] Created resource provider record via placement API for resource provider with UUID 9903a6f8-fb0a-4d8e-b632-398eaedd969e and name compute-1.ctlplane.example.com.#033[00m
Jan 22 08:55:29 np0005592158 nova_compute[221400]: 2026-01-22 13:55:29.631 221408 DEBUG oslo_concurrency.processutils [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 08:55:29 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:30 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 08:55:30 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1504024635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 08:55:30 np0005592158 nova_compute[221400]: 2026-01-22 13:55:30.226 221408 DEBUG oslo_concurrency.processutils [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 08:55:30 np0005592158 nova_compute[221400]: 2026-01-22 13:55:30.231 221408 DEBUG nova.virt.libvirt.host [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 22 08:55:30 np0005592158 nova_compute[221400]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Jan 22 08:55:30 np0005592158 nova_compute[221400]: 2026-01-22 13:55:30.231 221408 INFO nova.virt.libvirt.host [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] kernel doesn't support AMD SEV#033[00m
Jan 22 08:55:30 np0005592158 nova_compute[221400]: 2026-01-22 13:55:30.232 221408 DEBUG nova.compute.provider_tree [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Updating inventory in ProviderTree for provider 9903a6f8-fb0a-4d8e-b632-398eaedd969e with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 22 08:55:30 np0005592158 nova_compute[221400]: 2026-01-22 13:55:30.232 221408 DEBUG nova.virt.libvirt.driver [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 22 08:55:30 np0005592158 nova_compute[221400]: 2026-01-22 13:55:30.235 221408 DEBUG nova.virt.libvirt.driver [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Libvirt baseline CPU <cpu>
Jan 22 08:55:30 np0005592158 nova_compute[221400]:  <arch>x86_64</arch>
Jan 22 08:55:30 np0005592158 nova_compute[221400]:  <model>Nehalem</model>
Jan 22 08:55:30 np0005592158 nova_compute[221400]:  <vendor>AMD</vendor>
Jan 22 08:55:30 np0005592158 nova_compute[221400]:  <topology sockets="8" cores="1" threads="1"/>
Jan 22 08:55:30 np0005592158 nova_compute[221400]: </cpu>
Jan 22 08:55:30 np0005592158 nova_compute[221400]: _get_guest_baseline_cpu_features /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12537#033[00m
Jan 22 08:55:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:30.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:30.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:31 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:31 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 08:55:31 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3652 writes, 21K keys, 3652 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.03 MB/s#012Cumulative WAL: 3652 writes, 3652 syncs, 1.00 writes per sync, written: 0.04 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1644 writes, 8820 keys, 1644 commit groups, 1.0 writes per commit group, ingest: 15.75 MB, 0.03 MB/s#012Interval WAL: 1644 writes, 1644 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     39.5      0.60              0.07        11    0.054       0      0       0.0       0.0#012  L6      1/0    7.98 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.5    109.9     92.5      0.90              0.22        10    0.090     53K   5362       0.0       0.0#012 Sum      1/0    7.98 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.5     66.0     71.3      1.50              0.29        21    0.071     53K   5362       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.5     59.6     59.6      0.98              0.16        12    0.081     35K   3554       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    109.9     92.5      0.90              0.22        10    0.090     53K   5362       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     39.6      0.60              0.07        10    0.060       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.023, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.10 GB write, 0.09 MB/s write, 0.10 GB read, 0.08 MB/s read, 1.5 seconds#012Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 1.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f7686a91f0#2 capacity: 304.00 MB usage: 7.01 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000102 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(355,6.60 MB,2.1698%) FilterBlock(21,158.98 KB,0.0510718%) IndexBlock(21,261.39 KB,0.0839685%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 22 08:55:32 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:32 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:55:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:32.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:55:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:55:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:32.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:55:32 np0005592158 nova_compute[221400]: 2026-01-22 13:55:32.859 221408 DEBUG nova.scheduler.client.report [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Updated inventory for provider 9903a6f8-fb0a-4d8e-b632-398eaedd969e with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 22 08:55:32 np0005592158 nova_compute[221400]: 2026-01-22 13:55:32.860 221408 DEBUG nova.compute.provider_tree [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Updating resource provider 9903a6f8-fb0a-4d8e-b632-398eaedd969e generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 22 08:55:32 np0005592158 nova_compute[221400]: 2026-01-22 13:55:32.860 221408 DEBUG nova.compute.provider_tree [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Updating inventory in ProviderTree for provider 9903a6f8-fb0a-4d8e-b632-398eaedd969e with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 22 08:55:32 np0005592158 nova_compute[221400]: 2026-01-22 13:55:32.947 221408 DEBUG nova.compute.provider_tree [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Updating resource provider 9903a6f8-fb0a-4d8e-b632-398eaedd969e generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 22 08:55:32 np0005592158 nova_compute[221400]: 2026-01-22 13:55:32.968 221408 DEBUG nova.compute.resource_tracker [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 08:55:32 np0005592158 nova_compute[221400]: 2026-01-22 13:55:32.969 221408 DEBUG oslo_concurrency.lockutils [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 4.989s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 08:55:32 np0005592158 nova_compute[221400]: 2026-01-22 13:55:32.969 221408 DEBUG nova.service [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Jan 22 08:55:33 np0005592158 nova_compute[221400]: 2026-01-22 13:55:33.029 221408 DEBUG nova.service [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Jan 22 08:55:33 np0005592158 nova_compute[221400]: 2026-01-22 13:55:33.030 221408 DEBUG nova.servicegroup.drivers.db [None req-e7299c76-2051-4f93-a8ab-f4b68a946603 - - - - - -] DB_Driver: join new ServiceGroup member compute-1.ctlplane.example.com to the compute group, service = <Service: host=compute-1.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Jan 22 08:55:33 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1124 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:55:33 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:55:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:34.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:34.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:35 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:35 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:55:35 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:55:35 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:55:36 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:36 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:55:36 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:55:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:36.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:36.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:37 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:37 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:38 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:55:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:38.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:55:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:55:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:38.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:39 np0005592158 podman[221999]: 2026-01-22 13:55:39.150914643 +0000 UTC m=+0.141602585 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 08:55:39 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1129 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:55:39 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:55:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:40.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:55:40 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:40.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:41 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:55:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:42.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:55:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:42.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:42 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:55:42 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:55:42 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:55:43 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:43 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1134 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:55:44 np0005592158 nova_compute[221400]: 2026-01-22 13:55:44.032 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 08:55:44 np0005592158 nova_compute[221400]: 2026-01-22 13:55:44.057 221408 DEBUG oslo_service.periodic_task [None req-1f1a45d4-9599-4b52-a32b-c25b82c14df8 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 08:55:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:44.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:44.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:45 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:46 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:46 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:46.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:46.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:55:47.428 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 08:55:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:55:47.429 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 08:55:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 13:55:47.429 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 08:55:47 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:48.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:48.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:55:49 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:50 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1139 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:55:50 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:50.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:50.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:51 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:51 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:52 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:52.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:52.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:53 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:53 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:55:54 np0005592158 podman[222073]: 2026-01-22 13:55:54.072978264 +0000 UTC m=+0.057396147 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 08:55:54 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1144 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:55:54 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:54.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:54.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:55 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:56 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:55:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:56.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:55:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:56.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:57 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:58.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:58 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:55:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:58.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:55:59 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1149 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:55:59 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:56:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:56:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:56:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:00.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:56:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:56:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:56:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:00.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:56:00 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:56:02 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:56:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:56:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:56:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:02.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:56:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:56:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:56:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:02.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:56:03 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:56:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:56:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:56:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:56:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:04.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:56:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:56:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:56:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:04.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:56:06 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:56:06 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1154 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:56:06 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:56:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:56:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:56:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:06.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:56:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:56:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:56:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:06.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:56:07 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:56:07 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:56:07 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:56:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:56:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:56:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:08.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:56:08 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:56:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:56:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:56:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:08.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:56:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:56:09 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1159 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:56:09 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:56:09 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 08:56:09 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2769503677' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 08:56:09 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 08:56:09 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2769503677' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 08:56:10 np0005592158 podman[222091]: 2026-01-22 13:56:10.107847656 +0000 UTC m=+0.094539211 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Jan 22 08:56:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:56:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:56:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:10.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:56:10 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:56:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:56:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:56:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:10.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:56:12 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:56:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:56:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:56:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:12.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:56:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:56:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:56:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:12.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:56:13 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:56:13 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:56:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:56:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:56:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:14.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:56:14 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:56:14 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1164 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:56:14 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:56:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:56:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:56:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:14.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:56:15 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:56:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:56:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:56:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:16.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:56:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:56:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:56:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:16.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:56:16 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:56:18 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:56:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:56:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:56:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:18.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:56:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:56:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:56:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:18.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:56:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:56:19 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:56:19 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1168 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:56:19 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:56:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:56:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:56:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:20.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:56:20 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:56:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:56:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:56:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:20.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:56:21 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:56:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:56:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:56:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:22.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:56:22 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:56:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:56:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:56:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:22.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:56:23 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:56:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:56:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:56:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:56:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:24.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:56:24 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 1174 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:56:24 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:56:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 08:56:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:56:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:24.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:56:25 np0005592158 podman[222118]: 2026-01-22 13:56:25.095915759 +0000 UTC m=+0.082036160 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 08:56:25 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:03:29 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:29 np0005592158 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1599 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:03:30 np0005592158 rsyslogd[1007]: imjournal: 2792 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Jan 22 09:03:30 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:03:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:31.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:03:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:03:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:31.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:03:31 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:32 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:03:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:33.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:03:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:03:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:33.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:03:33 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:34 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:03:34 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:34 np0005592158 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1604 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:03:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:03:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:03:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:35.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:03:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:03:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:03:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:35.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:03:35 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:36 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:37 np0005592158 podman[224128]: 2026-01-22 14:03:37.064514553 +0000 UTC m=+0.053352092 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 22 09:03:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:03:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:37.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:03:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:37.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:37 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:39 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:03:39 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:03:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:39.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:03:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:39.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:40 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:40 np0005592158 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1609 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:03:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:03:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:41.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:03:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:03:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:41.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:03:41 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:42 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:42 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:03:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:43.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:03:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:43.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:43 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:44 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:03:45 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:45 np0005592158 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1614 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:03:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:03:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:03:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:45.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:03:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:03:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:45.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:46 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:46 np0005592158 podman[224319]: 2026-01-22 14:03:46.644577282 +0000 UTC m=+0.064271591 container exec 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 22 09:03:46 np0005592158 podman[224319]: 2026-01-22 14:03:46.745043454 +0000 UTC m=+0.164737743 container exec_died 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:03:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:03:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:47.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:47 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:47 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:03:47 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:03:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:03:47.438 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:03:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:03:47.439 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:03:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:03:47.439 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:03:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:03:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:47.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:48 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:48 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:48 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:03:48 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:03:49 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:03:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:03:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:49.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:49 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:49 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:03:49 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:03:49 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:03:49 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:03:49 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:03:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:03:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:49.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:50 np0005592158 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1618 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:03:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:03:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:51.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:03:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:51.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:51 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:51 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:52 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:03:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:53.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:03:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:53.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:53 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:53 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:54 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:03:55 np0005592158 podman[224690]: 2026-01-22 14:03:55.097880142 +0000 UTC m=+0.087935748 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 22 09:03:55 np0005592158 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1623 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:03:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:03:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:03:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:55.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:03:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:03:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:55.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:56 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:56 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:56 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:03:56 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:03:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:03:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:57.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:03:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:57.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:57 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:58 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:59 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:03:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:03:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:59.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:03:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:59.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:59 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:59 np0005592158 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1628 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:00 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:04:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:01.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:04:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:01.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:01 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:03 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:04:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:03.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:04:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:03.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:04 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:04:04 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:05 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:05 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:05 np0005592158 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1633 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:05.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:05.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:06 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:07.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:07.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:07 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:08 np0005592158 podman[224768]: 2026-01-22 14:04:08.103000505 +0000 UTC m=+0.091359463 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 22 09:04:09 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:04:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:09.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:09 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:09.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:10 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:10 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:10 np0005592158 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1638 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:11.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:11.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:12 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:13 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:04:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:13.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:04:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:13.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:14 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:04:14 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:14 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:15.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:15.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:16 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:16 np0005592158 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1643 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:17 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:17.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:17.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:04:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1206611799' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:04:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:04:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1206611799' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:04:18 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:19 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:04:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:19.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:19 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:19.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:20 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:20 np0005592158 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1648 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:21.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:21 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:21 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:21.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:22 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:23.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:23.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:23 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:24 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:04:24 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:24 np0005592158 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1654 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:25.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:25.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:25 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:26 np0005592158 podman[224787]: 2026-01-22 14:04:26.090755369 +0000 UTC m=+0.085348169 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 22 09:04:27 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:27.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:27.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:28 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:29 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:04:29 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:29.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:29.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:30 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:30 np0005592158 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1659 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:31.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:31 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:31 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:04:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:31.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:04:32 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:33.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:33.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:33 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:34 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:04:34 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 09:04:34 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.5 total, 600.0 interval#012Cumulative writes: 7297 writes, 27K keys, 7297 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 7297 writes, 1549 syncs, 4.71 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 710 writes, 1494 keys, 710 commit groups, 1.0 writes per commit group, ingest: 0.67 MB, 0.00 MB/s#012Interval WAL: 710 writes, 312 syncs, 2.28 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 09:04:34 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:34 np0005592158 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1664 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:04:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:35.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:04:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:04:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:35.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:04:35 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:36 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:37.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:37.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:38 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:39 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:04:39 np0005592158 podman[224813]: 2026-01-22 14:04:39.060551512 +0000 UTC m=+0.051601305 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:04:39 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:39.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:39.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:40 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:40 np0005592158 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1669 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:04:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:41.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:04:41 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:41.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:42 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:42 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:04:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:43.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:04:43 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:43.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:44 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:04:44 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:45.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:45.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:45 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:45 np0005592158 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1674 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:47 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:04:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:47.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:04:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:04:47.439 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:04:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:04:47.440 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:04:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:04:47.440 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:04:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:47.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:48 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:49 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:04:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:49.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:49 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:49 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:04:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:49.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:04:51 np0005592158 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1679 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:51 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:51.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:51.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:52 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:53 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:04:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:53.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:04:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:53.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:54 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:04:54 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:55 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:55 np0005592158 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1684 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:04:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:55.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:04:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:04:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:55.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:04:56 np0005592158 podman[224856]: 2026-01-22 14:04:56.299843008 +0000 UTC m=+0.095278050 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 09:04:56 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:57.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:57 np0005592158 podman[225126]: 2026-01-22 14:04:57.444221307 +0000 UTC m=+0.045196078 container create 0fdb15f92470dbc0c72635ac99ab7511277a2d4946a2e25b88e6e66cc79b42fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:04:57 np0005592158 systemd[1]: Started libpod-conmon-0fdb15f92470dbc0c72635ac99ab7511277a2d4946a2e25b88e6e66cc79b42fd.scope.
Jan 22 09:04:57 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:57 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:57 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:04:57 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:04:57 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:04:57 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:04:57 np0005592158 podman[225126]: 2026-01-22 14:04:57.424150548 +0000 UTC m=+0.025125339 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:04:57 np0005592158 systemd[1]: Started libcrun container.
Jan 22 09:04:57 np0005592158 podman[225126]: 2026-01-22 14:04:57.537175833 +0000 UTC m=+0.138150624 container init 0fdb15f92470dbc0c72635ac99ab7511277a2d4946a2e25b88e6e66cc79b42fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hugle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 09:04:57 np0005592158 podman[225126]: 2026-01-22 14:04:57.544188045 +0000 UTC m=+0.145162816 container start 0fdb15f92470dbc0c72635ac99ab7511277a2d4946a2e25b88e6e66cc79b42fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:04:57 np0005592158 podman[225126]: 2026-01-22 14:04:57.547188857 +0000 UTC m=+0.148163658 container attach 0fdb15f92470dbc0c72635ac99ab7511277a2d4946a2e25b88e6e66cc79b42fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:04:57 np0005592158 sweet_hugle[225143]: 167 167
Jan 22 09:04:57 np0005592158 systemd[1]: libpod-0fdb15f92470dbc0c72635ac99ab7511277a2d4946a2e25b88e6e66cc79b42fd.scope: Deactivated successfully.
Jan 22 09:04:57 np0005592158 podman[225126]: 2026-01-22 14:04:57.550851587 +0000 UTC m=+0.151826378 container died 0fdb15f92470dbc0c72635ac99ab7511277a2d4946a2e25b88e6e66cc79b42fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hugle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 22 09:04:57 np0005592158 systemd[1]: var-lib-containers-storage-overlay-cab389985ea1794aa58a3d52e87f91c7e1073325a57f831130d36db403b1f53a-merged.mount: Deactivated successfully.
Jan 22 09:04:57 np0005592158 podman[225126]: 2026-01-22 14:04:57.591895622 +0000 UTC m=+0.192870393 container remove 0fdb15f92470dbc0c72635ac99ab7511277a2d4946a2e25b88e6e66cc79b42fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 22 09:04:57 np0005592158 systemd[1]: libpod-conmon-0fdb15f92470dbc0c72635ac99ab7511277a2d4946a2e25b88e6e66cc79b42fd.scope: Deactivated successfully.
Jan 22 09:04:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:57.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:57 np0005592158 podman[225165]: 2026-01-22 14:04:57.775313934 +0000 UTC m=+0.046844944 container create 498cd66a05e057183febadcad53d4ebf55699899b96588d33004c025a226d8e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 22 09:04:57 np0005592158 systemd[1]: Started libpod-conmon-498cd66a05e057183febadcad53d4ebf55699899b96588d33004c025a226d8e5.scope.
Jan 22 09:04:57 np0005592158 systemd[1]: Started libcrun container.
Jan 22 09:04:57 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/754dae6e5a81cb00d0d5bb0d116eba1157b0548dc75449a84ebd407f3630c11e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:04:57 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/754dae6e5a81cb00d0d5bb0d116eba1157b0548dc75449a84ebd407f3630c11e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:04:57 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/754dae6e5a81cb00d0d5bb0d116eba1157b0548dc75449a84ebd407f3630c11e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:04:57 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/754dae6e5a81cb00d0d5bb0d116eba1157b0548dc75449a84ebd407f3630c11e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:04:57 np0005592158 podman[225165]: 2026-01-22 14:04:57.754278138 +0000 UTC m=+0.025809178 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:04:57 np0005592158 podman[225165]: 2026-01-22 14:04:57.858567614 +0000 UTC m=+0.130098634 container init 498cd66a05e057183febadcad53d4ebf55699899b96588d33004c025a226d8e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:04:57 np0005592158 podman[225165]: 2026-01-22 14:04:57.865576905 +0000 UTC m=+0.137107915 container start 498cd66a05e057183febadcad53d4ebf55699899b96588d33004c025a226d8e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_golick, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:04:57 np0005592158 podman[225165]: 2026-01-22 14:04:57.869907505 +0000 UTC m=+0.141438515 container attach 498cd66a05e057183febadcad53d4ebf55699899b96588d33004c025a226d8e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:04:58 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:59 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:04:59 np0005592158 zealous_golick[225182]: [
Jan 22 09:04:59 np0005592158 zealous_golick[225182]:    {
Jan 22 09:04:59 np0005592158 zealous_golick[225182]:        "available": false,
Jan 22 09:04:59 np0005592158 zealous_golick[225182]:        "ceph_device": false,
Jan 22 09:04:59 np0005592158 zealous_golick[225182]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 22 09:04:59 np0005592158 zealous_golick[225182]:        "lsm_data": {},
Jan 22 09:04:59 np0005592158 zealous_golick[225182]:        "lvs": [],
Jan 22 09:04:59 np0005592158 zealous_golick[225182]:        "path": "/dev/sr0",
Jan 22 09:04:59 np0005592158 zealous_golick[225182]:        "rejected_reasons": [
Jan 22 09:04:59 np0005592158 zealous_golick[225182]:            "Has a FileSystem",
Jan 22 09:04:59 np0005592158 zealous_golick[225182]:            "Insufficient space (<5GB)"
Jan 22 09:04:59 np0005592158 zealous_golick[225182]:        ],
Jan 22 09:04:59 np0005592158 zealous_golick[225182]:        "sys_api": {
Jan 22 09:04:59 np0005592158 zealous_golick[225182]:            "actuators": null,
Jan 22 09:04:59 np0005592158 zealous_golick[225182]:            "device_nodes": "sr0",
Jan 22 09:04:59 np0005592158 zealous_golick[225182]:            "devname": "sr0",
Jan 22 09:04:59 np0005592158 zealous_golick[225182]:            "human_readable_size": "482.00 KB",
Jan 22 09:04:59 np0005592158 zealous_golick[225182]:            "id_bus": "ata",
Jan 22 09:04:59 np0005592158 zealous_golick[225182]:            "model": "QEMU DVD-ROM",
Jan 22 09:04:59 np0005592158 zealous_golick[225182]:            "nr_requests": "2",
Jan 22 09:04:59 np0005592158 zealous_golick[225182]:            "parent": "/dev/sr0",
Jan 22 09:04:59 np0005592158 zealous_golick[225182]:            "partitions": {},
Jan 22 09:04:59 np0005592158 zealous_golick[225182]:            "path": "/dev/sr0",
Jan 22 09:04:59 np0005592158 zealous_golick[225182]:            "removable": "1",
Jan 22 09:04:59 np0005592158 zealous_golick[225182]:            "rev": "2.5+",
Jan 22 09:04:59 np0005592158 zealous_golick[225182]:            "ro": "0",
Jan 22 09:04:59 np0005592158 zealous_golick[225182]:            "rotational": "1",
Jan 22 09:04:59 np0005592158 zealous_golick[225182]:            "sas_address": "",
Jan 22 09:04:59 np0005592158 zealous_golick[225182]:            "sas_device_handle": "",
Jan 22 09:04:59 np0005592158 zealous_golick[225182]:            "scheduler_mode": "mq-deadline",
Jan 22 09:04:59 np0005592158 zealous_golick[225182]:            "sectors": 0,
Jan 22 09:04:59 np0005592158 zealous_golick[225182]:            "sectorsize": "2048",
Jan 22 09:04:59 np0005592158 zealous_golick[225182]:            "size": 493568.0,
Jan 22 09:04:59 np0005592158 zealous_golick[225182]:            "support_discard": "2048",
Jan 22 09:04:59 np0005592158 zealous_golick[225182]:            "type": "disk",
Jan 22 09:04:59 np0005592158 zealous_golick[225182]:            "vendor": "QEMU"
Jan 22 09:04:59 np0005592158 zealous_golick[225182]:        }
Jan 22 09:04:59 np0005592158 zealous_golick[225182]:    }
Jan 22 09:04:59 np0005592158 zealous_golick[225182]: ]
Jan 22 09:04:59 np0005592158 systemd[1]: libpod-498cd66a05e057183febadcad53d4ebf55699899b96588d33004c025a226d8e5.scope: Deactivated successfully.
Jan 22 09:04:59 np0005592158 systemd[1]: libpod-498cd66a05e057183febadcad53d4ebf55699899b96588d33004c025a226d8e5.scope: Consumed 1.297s CPU time.
Jan 22 09:04:59 np0005592158 podman[225165]: 2026-01-22 14:04:59.150141664 +0000 UTC m=+1.421672694 container died 498cd66a05e057183febadcad53d4ebf55699899b96588d33004c025a226d8e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_golick, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:04:59 np0005592158 systemd[1]: var-lib-containers-storage-overlay-754dae6e5a81cb00d0d5bb0d116eba1157b0548dc75449a84ebd407f3630c11e-merged.mount: Deactivated successfully.
Jan 22 09:04:59 np0005592158 podman[225165]: 2026-01-22 14:04:59.212884581 +0000 UTC m=+1.484415591 container remove 498cd66a05e057183febadcad53d4ebf55699899b96588d33004c025a226d8e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_golick, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 22 09:04:59 np0005592158 systemd[1]: libpod-conmon-498cd66a05e057183febadcad53d4ebf55699899b96588d33004c025a226d8e5.scope: Deactivated successfully.
Jan 22 09:04:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:04:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:59.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:04:59 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:59 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:04:59 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:04:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:04:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:59.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:00 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:00 np0005592158 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1689 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:01.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:01 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:01 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:05:01 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:05:01 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:05:01 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:05:01 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:05:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:01.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:02 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:05:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:03.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:05:03 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:03.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:04 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:05:04 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:04 np0005592158 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1694 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:05.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:05.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:05 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:06 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:07.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:07.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:07 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:07 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:05:07 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:05:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:09.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:05:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:09.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:05:10 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:05:10 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:10 np0005592158 podman[226422]: 2026-01-22 14:05:10.125158532 +0000 UTC m=+0.098274523 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 22 09:05:11 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:11 np0005592158 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1699 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:11 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:11.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:11.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:12 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:13 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:13.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:13.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:14 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:14 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:15 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:05:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:05:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:15.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:05:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:15.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:15 np0005592158 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1704 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:15 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:16 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:17.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:05:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:17.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:05:17 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:19 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:19.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:19.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:20 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:05:20 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:20 np0005592158 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1709 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:21 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:21.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:21.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:22 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:23.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:23.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:23 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:23 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:25 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:05:25 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:25 np0005592158 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1714 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:25.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:25.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:26 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:27 np0005592158 podman[226443]: 2026-01-22 14:05:27.108979779 +0000 UTC m=+0.093842950 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:05:27 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:27.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:27.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:28 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:05:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:29.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:05:29 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:05:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:29.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:05:30 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:05:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:31.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:31.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:31 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:31 np0005592158 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1719 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:31 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:31 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 09:05:31 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 5485 writes, 31K keys, 5485 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s#012Cumulative WAL: 5485 writes, 5485 syncs, 1.00 writes per sync, written: 0.06 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1833 writes, 9373 keys, 1833 commit groups, 1.0 writes per commit group, ingest: 16.85 MB, 0.03 MB/s#012Interval WAL: 1833 writes, 1833 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     40.7      0.81              0.10        16    0.051       0      0       0.0       0.0#012  L6      1/0    8.50 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.8    120.1    100.6      1.26              0.35        15    0.084     86K   7950       0.0       0.0#012 Sum      1/0    8.50 MB   0.0      0.1     0.0      0.1       0.2      0.0       0.0   4.8     73.0     77.1      2.07              0.45        31    0.067     86K   7950       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.6     91.3     92.2      0.57              0.17        10    0.057     33K   2588       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    120.1    100.6      1.26              0.35        15    0.084     86K   7950       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     40.7      0.81              0.10        15    0.054       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.032, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.16 GB write, 0.09 MB/s write, 0.15 GB read, 0.08 MB/s read, 2.1 seconds#012Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f7686a91f0#2 capacity: 304.00 MB usage: 14.31 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 9.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(750,13.68 MB,4.50078%) FilterBlock(31,253.67 KB,0.081489%) IndexBlock(31,393.67 KB,0.126462%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 22 09:05:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:33.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:33.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:33 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:33 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:33 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:35 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:05:35 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:35 np0005592158 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1724 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:35.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:35.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:36 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:37.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:37 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:37 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:37.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:38 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:39.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:39.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:39 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:39 np0005592158 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1729 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:40 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:05:40 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:41 np0005592158 podman[226470]: 2026-01-22 14:05:41.064430917 +0000 UTC m=+0.051977395 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 22 09:05:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:05:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:41.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:05:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:41.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:41 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:43 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:43.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:43.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:44 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:45 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:05:45 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:45 np0005592158 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1734 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:45.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:45.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:46 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:05:47.441 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:05:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:05:47.441 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:05:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:05:47.441 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:05:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:47.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:47 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:47.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:48 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:48 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:49.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:49 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:49.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:50 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:05:50 np0005592158 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1739 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:50 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:51.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:51 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:51.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:52 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:05:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:53.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:05:53 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:53.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:54 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:54 np0005592158 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1744 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:55 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:05:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:55.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:55.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:55 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:56 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:57.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:57.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:57 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:58 np0005592158 podman[226489]: 2026-01-22 14:05:58.107003294 +0000 UTC m=+0.101434159 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 09:05:58 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:05:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:59.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:05:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:59.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:59 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:05:59 np0005592158 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 1749 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:00 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:06:00 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:01.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:01.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:01 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:03 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:03.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:03.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:04 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:04 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:05 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:06:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:06:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:05.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:06:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:05.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:05 np0005592158 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 1754 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:05 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:05 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0.
Jan 22 09:06:05 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:06:05.831384) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:06:05 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58
Jan 22 09:06:05 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090765831427, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2716, "num_deletes": 506, "total_data_size": 5078584, "memory_usage": 5159296, "flush_reason": "Manual Compaction"}
Jan 22 09:06:05 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started
Jan 22 09:06:05 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090765853269, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 3276667, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29067, "largest_seqno": 31778, "table_properties": {"data_size": 3266581, "index_size": 5556, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3461, "raw_key_size": 27698, "raw_average_key_size": 20, "raw_value_size": 3242643, "raw_average_value_size": 2389, "num_data_blocks": 243, "num_entries": 1357, "num_filter_entries": 1357, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769090600, "oldest_key_time": 1769090600, "file_creation_time": 1769090765, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:06:05 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 21946 microseconds, and 9223 cpu microseconds.
Jan 22 09:06:05 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:06:05 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:06:05.853328) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 3276667 bytes OK
Jan 22 09:06:05 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:06:05.853353) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started
Jan 22 09:06:05 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:06:05.855106) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done
Jan 22 09:06:05 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:06:05.855125) EVENT_LOG_v1 {"time_micros": 1769090765855119, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:06:05 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:06:05.855145) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:06:05 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 5065282, prev total WAL file size 5065282, number of live WAL files 2.
Jan 22 09:06:05 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:06:05 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:06:05.856870) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Jan 22 09:06:05 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:06:05 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(3199KB)], [57(8704KB)]
Jan 22 09:06:05 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090765856957, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 12189581, "oldest_snapshot_seqno": -1}
Jan 22 09:06:05 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 6972 keys, 10360528 bytes, temperature: kUnknown
Jan 22 09:06:05 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090765946804, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 10360528, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10316156, "index_size": 25828, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17477, "raw_key_size": 183838, "raw_average_key_size": 26, "raw_value_size": 10190947, "raw_average_value_size": 1461, "num_data_blocks": 1022, "num_entries": 6972, "num_filter_entries": 6972, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769090765, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:06:05 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:06:05 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:06:05.947138) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 10360528 bytes
Jan 22 09:06:05 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:06:05.950196) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.5 rd, 115.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.5 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(6.9) write-amplify(3.2) OK, records in: 8002, records dropped: 1030 output_compression: NoCompression
Jan 22 09:06:05 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:06:05.950216) EVENT_LOG_v1 {"time_micros": 1769090765950207, "job": 34, "event": "compaction_finished", "compaction_time_micros": 89939, "compaction_time_cpu_micros": 35262, "output_level": 6, "num_output_files": 1, "total_output_size": 10360528, "num_input_records": 8002, "num_output_records": 6972, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:06:05 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:06:05 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090765950888, "job": 34, "event": "table_file_deletion", "file_number": 59}
Jan 22 09:06:05 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:06:05 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090765953038, "job": 34, "event": "table_file_deletion", "file_number": 57}
Jan 22 09:06:05 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:06:05.856637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:06:05 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:06:05.953246) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:06:05 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:06:05.953253) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:06:05 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:06:05.953255) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:06:05 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:06:05.953256) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:06:05 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:06:05.953258) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:06:06 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:06:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:07.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:06:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:07.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:07 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:08 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:08 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:06:08 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:06:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:09.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:06:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:09.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:06:10 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:06:10 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:10 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:06:10 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:06:10 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:06:10 np0005592158 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 1759 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:11 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:11.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:11.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:11 np0005592158 ceph-mgr[82073]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 09:06:12 np0005592158 podman[226766]: 2026-01-22 14:06:12.065060504 +0000 UTC m=+0.054491163 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 22 09:06:12 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:13.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:13.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:14 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:14 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:15 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:06:15 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:15 np0005592158 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 1764 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:15.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:15.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:16 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:17.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:17 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:17 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:06:17 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:06:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:17.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:06:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2515016526' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:06:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:06:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2515016526' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:06:19 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:19 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:06:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:19.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:06:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:19.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:20 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:06:20 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:20 np0005592158 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 1769 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:21 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:21.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:21.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:22 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:22 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:23.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:23.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:24 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:25 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:06:25 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:25 np0005592158 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 1774 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:25.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:25.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:26 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:27.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:27 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:27.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:28 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:28 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:29 np0005592158 podman[226833]: 2026-01-22 14:06:29.139841447 +0000 UTC m=+0.125934614 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Jan 22 09:06:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:29.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:29.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:30 np0005592158 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 1779 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:30 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:06:31 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:31 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:31.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:31.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:32 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:32 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:33.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:06:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:33.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:06:34 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:35 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:06:35 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:35 np0005592158 ceph-mon[81715]: Health check update: 10 slow ops, oldest one blocked for 1784 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:35.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:35.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:36 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:37.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:37 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:37 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:06:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:37.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:06:38 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:39.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:39.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:39 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:39 np0005592158 ceph-mon[81715]: Health check update: 10 slow ops, oldest one blocked for 1789 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:40 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:06:40 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:41.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:41.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:42 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:43 np0005592158 podman[226859]: 2026-01-22 14:06:43.065374843 +0000 UTC m=+0.054847147 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 22 09:06:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:43.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:43.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:44 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:45 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:45 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:45 np0005592158 ceph-mon[81715]: Health check update: 10 slow ops, oldest one blocked for 1794 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:45 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:06:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:06:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:45.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:06:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:45.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:46 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:06:47.442 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:06:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:06:47.442 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:06:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:06:47.443 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:06:47 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:47.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:47.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:48 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:48 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:49.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:49 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:49.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:50 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:06:50 np0005592158 ceph-mon[81715]: Health check update: 10 slow ops, oldest one blocked for 1798 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:50 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:06:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:51.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:06:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:51.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:52 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:53.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:53 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:53 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:53.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:54 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:54 np0005592158 ceph-mon[81715]: Health check update: 7 slow ops, oldest one blocked for 1803 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:55 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:06:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:55.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:55 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:55.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:56 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:06:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:57.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:06:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:57.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:58 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:59 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:59.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:06:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:06:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:59.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:07:00 np0005592158 podman[226879]: 2026-01-22 14:07:00.094112677 +0000 UTC m=+0.082748348 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:07:00 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:07:00 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:07:00 np0005592158 ceph-mon[81715]: Health check update: 7 slow ops, oldest one blocked for 1808 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:01 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:01.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:01.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:02 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:07:02 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:03.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:03 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:03.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:04 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:04 np0005592158 ceph-mon[81715]: Health check update: 7 slow ops, oldest one blocked for 1813 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:05 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:07:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:05.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:05.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:05 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:07 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:07.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:07.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:08 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:09 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:09.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:09.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:10 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:07:10 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:10 np0005592158 ceph-mon[81715]: Health check update: 7 slow ops, oldest one blocked for 1818 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:11 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:11 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.002000054s ======
Jan 22 09:07:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:11.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Jan 22 09:07:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:11.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:12 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:13 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:13.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:13.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:14 np0005592158 podman[226906]: 2026-01-22 14:07:14.05629758 +0000 UTC m=+0.051580108 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 09:07:14 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:15 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:07:15 np0005592158 ceph-mon[81715]: Health check update: 7 slow ops, oldest one blocked for 1823 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:15 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:07:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:15.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:07:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:15.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:16 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:17.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:17.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:18 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:19 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 09:07:19 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:07:19 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 09:07:19 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:19 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:07:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:07:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:19.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:07:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:19.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:20 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:07:20 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:20 np0005592158 ceph-mon[81715]: Health check update: 7 slow ops, oldest one blocked for 1828 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:21 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:21 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:07:21 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:07:21 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:07:21 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:07:21 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:07:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:21.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:21.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:22 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:23 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:23 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:23.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:23.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:24 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:25 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:07:25 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1833 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:25 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:25.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:07:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:25.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:07:26 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:27 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:27.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:27.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:29 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:29 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:07:29 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:07:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:29.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:29.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:30 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:07:30 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:30 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1838 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:31 np0005592158 podman[227106]: 2026-01-22 14:07:31.095607459 +0000 UTC m=+0.086430487 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, 
org.label-schema.license=GPLv2, config_id=ovn_controller)
Jan 22 09:07:31 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:31 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:31.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:31.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:32 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:07:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:33.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:07:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:33.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:33 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:34 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:34 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1843 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:35 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:07:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:07:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:35.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:07:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:07:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:35.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:07:35 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:37 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.005000135s ======
Jan 22 09:07:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:37.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.005000135s
Jan 22 09:07:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:07:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:37.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:07:38 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:39 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:39.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:39.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:40 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:07:40 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:40 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1848 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:40 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:07:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:41.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:07:41 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:41.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:42 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:07:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:43.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:07:43 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:43.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:44 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:44 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1853 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:45 np0005592158 podman[227133]: 2026-01-22 14:07:45.065247799 +0000 UTC m=+0.057922901 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:07:45 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:07:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:45.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:45.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:46 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:46 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:07:47.443 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:07:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:07:47.443 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:07:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:07:47.444 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:07:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:07:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:47.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:07:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:47.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:47 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:49 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:07:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:49.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:07:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:07:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:49.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:07:50 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:50 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1858 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:50 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:07:51 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:51.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:51.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:52 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:53 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:07:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:53.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:07:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:53.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:54 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:55 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:07:55 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:55 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1863 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:55 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:55.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:55.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:56 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:57.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:57 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:57.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:58 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:59.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:59 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:59 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1868 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:07:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:59.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:00 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:08:00 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:08:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:01.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:08:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:01.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:02 np0005592158 podman[227154]: 2026-01-22 14:08:02.091622161 +0000 UTC m=+0.086568322 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 09:08:02 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:03 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:08:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:03.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:08:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:03.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:04 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:04 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:05 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:08:05 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1873 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:05 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:05.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:05.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:06 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:08:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:07.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:08:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:07.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:08 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:09 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:08:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:09.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:08:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:08:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:09.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:08:11 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:08:11 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:11 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1878 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:08:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:11.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:08:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:11.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:12 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:12 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:13 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:08:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:13.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:08:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:13.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:14 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:15 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:15 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1883 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:08:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:15.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:08:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:15.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:16 np0005592158 podman[227180]: 2026-01-22 14:08:16.05096293 +0000 UTC m=+0.045525042 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:08:16 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:08:16 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:16 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:17.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:17 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:17.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:08:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/145215879' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:08:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:08:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/145215879' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:08:18 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:19.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:19 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:19 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1888 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:19.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:20 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:21 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:08:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:21.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:21 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:21.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:23 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:08:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:23.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:08:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:23.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:24 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:25 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:25 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1892 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:08:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:25.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:08:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:25.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:26 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:26 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:08:27 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:27.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:27.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:28 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:29 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:29.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:08:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:30.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:08:30 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:30 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1897 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:31 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:08:31 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:31.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:32.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:32 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:33 np0005592158 podman[227331]: 2026-01-22 14:08:33.120867467 +0000 UTC m=+0.115640053 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Jan 22 09:08:33 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:08:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:33.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:08:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:34.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:34 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:34 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:08:34 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:08:34 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:08:34 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:08:34 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:08:35 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:35 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1902 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:08:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:35.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:08:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:08:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:36.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:08:36 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:08:36 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:37 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:37.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:08:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:38.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:08:38 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:39 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:08:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:39.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:08:39 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0.
Jan 22 09:08:39 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:39.971756) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:08:39 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61
Jan 22 09:08:39 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090919972095, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2540, "num_deletes": 510, "total_data_size": 4637302, "memory_usage": 4704832, "flush_reason": "Manual Compaction"}
Jan 22 09:08:39 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started
Jan 22 09:08:39 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090919986062, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 2305866, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31784, "largest_seqno": 34318, "table_properties": {"data_size": 2297650, "index_size": 4006, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3141, "raw_key_size": 26045, "raw_average_key_size": 20, "raw_value_size": 2276708, "raw_average_value_size": 1819, "num_data_blocks": 172, "num_entries": 1251, "num_filter_entries": 1251, "num_deletions": 510, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769090766, "oldest_key_time": 1769090766, "file_creation_time": 1769090919, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:08:39 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 14355 microseconds, and 6270 cpu microseconds.
Jan 22 09:08:39 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:08:39 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:39.986117) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 2305866 bytes OK
Jan 22 09:08:39 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:39.986139) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started
Jan 22 09:08:39 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:39.987407) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done
Jan 22 09:08:39 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:39.987419) EVENT_LOG_v1 {"time_micros": 1769090919987415, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:08:39 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:39.987435) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:08:39 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 4624724, prev total WAL file size 4686344, number of live WAL files 2.
Jan 22 09:08:39 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:08:39 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:39.988965) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303033' seq:72057594037927935, type:22 .. '6C6F676D0031323538' seq:0, type:0; will stop at (end)
Jan 22 09:08:39 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:08:39 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(2251KB)], [60(10117KB)]
Jan 22 09:08:39 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090919989031, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 12666394, "oldest_snapshot_seqno": -1}
Jan 22 09:08:40 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 7230 keys, 9297924 bytes, temperature: kUnknown
Jan 22 09:08:40 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090920056563, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 9297924, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9254412, "index_size": 24328, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18117, "raw_key_size": 191668, "raw_average_key_size": 26, "raw_value_size": 9127182, "raw_average_value_size": 1262, "num_data_blocks": 948, "num_entries": 7230, "num_filter_entries": 7230, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769090919, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:08:40 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:08:40 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:40.056910) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 9297924 bytes
Jan 22 09:08:40 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:40.058747) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 187.1 rd, 137.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 9.9 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(9.5) write-amplify(4.0) OK, records in: 8223, records dropped: 993 output_compression: NoCompression
Jan 22 09:08:40 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:40.058790) EVENT_LOG_v1 {"time_micros": 1769090920058771, "job": 36, "event": "compaction_finished", "compaction_time_micros": 67690, "compaction_time_cpu_micros": 26929, "output_level": 6, "num_output_files": 1, "total_output_size": 9297924, "num_input_records": 8223, "num_output_records": 7230, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:08:40 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:08:40 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090920060006, "job": 36, "event": "table_file_deletion", "file_number": 62}
Jan 22 09:08:40 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:08:40 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090920062632, "job": 36, "event": "table_file_deletion", "file_number": 60}
Jan 22 09:08:40 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:39.988894) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:08:40 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:40.062819) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:08:40 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:40.062825) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:08:40 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:40.062827) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:08:40 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:40.062829) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:08:40 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:40.062831) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:08:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:40.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:40 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:40 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1907 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:40 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:08:40 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:08:41 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:08:41 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:41.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:42.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:42 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:43 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:43 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:08:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:43.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:08:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:44.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:44 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:45 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1913 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:45 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:45.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:46.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:46 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:08:46 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:47 np0005592158 podman[227408]: 2026-01-22 14:08:47.071873721 +0000 UTC m=+0.056719208 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 09:08:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:08:47.444 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:08:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:08:47.444 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:08:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:08:47.444 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:08:47 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:47.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:08:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:48.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:08:48 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:49.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:50 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:50 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1918 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:08:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:50.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:08:51 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:51 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:08:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:51.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:52 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:08:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:52.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:08:53 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:53 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #64. Immutable memtables: 0.
Jan 22 09:08:53 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:53.123195) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:08:53 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 64
Jan 22 09:08:53 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090933123229, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 431, "num_deletes": 251, "total_data_size": 408278, "memory_usage": 417608, "flush_reason": "Manual Compaction"}
Jan 22 09:08:53 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #65: started
Jan 22 09:08:53 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090933127206, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 65, "file_size": 268216, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34323, "largest_seqno": 34749, "table_properties": {"data_size": 265866, "index_size": 450, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6293, "raw_average_key_size": 19, "raw_value_size": 260953, "raw_average_value_size": 795, "num_data_blocks": 20, "num_entries": 328, "num_filter_entries": 328, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769090919, "oldest_key_time": 1769090919, "file_creation_time": 1769090933, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:08:53 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 4057 microseconds, and 1525 cpu microseconds.
Jan 22 09:08:53 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:08:53 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:53.127254) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #65: 268216 bytes OK
Jan 22 09:08:53 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:53.127270) [db/memtable_list.cc:519] [default] Level-0 commit table #65 started
Jan 22 09:08:53 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:53.128634) [db/memtable_list.cc:722] [default] Level-0 commit table #65: memtable #1 done
Jan 22 09:08:53 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:53.128647) EVENT_LOG_v1 {"time_micros": 1769090933128643, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:08:53 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:53.128679) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:08:53 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 405526, prev total WAL file size 405526, number of live WAL files 2.
Jan 22 09:08:53 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000061.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:08:53 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:53.129032) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Jan 22 09:08:53 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:08:53 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [65(261KB)], [63(9080KB)]
Jan 22 09:08:53 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090933129059, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [65], "files_L6": [63], "score": -1, "input_data_size": 9566140, "oldest_snapshot_seqno": -1}
Jan 22 09:08:53 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #66: 7046 keys, 7847515 bytes, temperature: kUnknown
Jan 22 09:08:53 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090933171626, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 66, "file_size": 7847515, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7806408, "index_size": 22371, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17669, "raw_key_size": 188633, "raw_average_key_size": 26, "raw_value_size": 7683390, "raw_average_value_size": 1090, "num_data_blocks": 861, "num_entries": 7046, "num_filter_entries": 7046, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769090933, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 66, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:08:53 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:08:53 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:53.171986) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 7847515 bytes
Jan 22 09:08:53 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:53.173250) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 224.0 rd, 183.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 8.9 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(64.9) write-amplify(29.3) OK, records in: 7558, records dropped: 512 output_compression: NoCompression
Jan 22 09:08:53 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:53.173273) EVENT_LOG_v1 {"time_micros": 1769090933173262, "job": 38, "event": "compaction_finished", "compaction_time_micros": 42698, "compaction_time_cpu_micros": 20101, "output_level": 6, "num_output_files": 1, "total_output_size": 7847515, "num_input_records": 7558, "num_output_records": 7046, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:08:53 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:08:53 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090933173460, "job": 38, "event": "table_file_deletion", "file_number": 65}
Jan 22 09:08:53 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000063.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:08:53 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090933175698, "job": 38, "event": "table_file_deletion", "file_number": 63}
Jan 22 09:08:53 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:53.128974) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:08:53 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:53.175840) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:08:53 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:53.175847) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:08:53 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:53.175849) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:08:53 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:53.175851) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:08:53 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:08:53.175852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:08:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:08:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:53.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:08:54 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:54.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:55 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:55 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1923 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:55.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:56.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:56 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:08:56 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:57.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:57 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:57 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:58.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:58 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:08:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:59.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:59 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:59 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1928 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:00.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:00 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:01 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:09:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:09:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:01.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:09:01 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:09:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:02.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:09:02 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:03.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:04 np0005592158 podman[227427]: 2026-01-22 14:09:04.122512191 +0000 UTC m=+0.108721665 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:09:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:04.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:04 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:05 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:05 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1932 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:05.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:06 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:09:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:06.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:06 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:07 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:09:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:07.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:09:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:09:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:08.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:09:08 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:09 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:09:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:09.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:09:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:10.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:10 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:10 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1937 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:11 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:09:11 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:11.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:09:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:12.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:09:12 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:13 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:13.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:14.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:14 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:14 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:15 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1942 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:15 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:15.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:16 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:09:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:16.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:16 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:17 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:17.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:18 np0005592158 podman[227454]: 2026-01-22 14:09:18.056445897 +0000 UTC m=+0.051748182 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 22 09:09:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:18.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:18 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:19 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:09:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:19.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:09:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:20.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:20 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1947 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:20 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:21 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:09:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:21.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:21 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:22.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:22 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:23.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:23 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:09:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:24.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:09:24 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:24 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1952 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:09:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:25.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:09:25 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:26 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:09:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:26.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:27 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:27.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:28 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:28.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:29 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:29.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:30 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:30 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1957 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:30.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:31 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:31 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:09:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:09:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:31.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:09:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:09:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:32.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:09:32 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:33 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:09:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:33.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:09:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:34.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:34 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:35 np0005592158 podman[227474]: 2026-01-22 14:09:35.086571171 +0000 UTC m=+0.080688541 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 22 09:09:35 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:35 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1962 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:09:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:35.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:09:36 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:09:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:36.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:36 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:37 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:09:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:37.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:09:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:38.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:38 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:39 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:09:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:39.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:09:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:40.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:40 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:40 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1967 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:40 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:41 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:09:41 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:41 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:09:41 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:09:41 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:09:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:41.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:42.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:42 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:43 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:09:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:43.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:09:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:09:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:44.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:09:44 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:45 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1972 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:45 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:45.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:46 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:09:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:09:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:46.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:09:46 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:09:47.445 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:09:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:09:47.445 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:09:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:09:47.445 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:09:47 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:09:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:47.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:09:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:09:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:48.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:09:48 np0005592158 podman[227655]: 2026-01-22 14:09:48.609721878 +0000 UTC m=+0.081177945 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 22 09:09:48 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:09:48 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:48 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:09:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:09:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:49.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:09:49 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:49 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1977 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:50.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:50 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:51 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:09:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:51.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:52.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:52 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:53 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:09:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:53.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:09:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:54.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:54 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:55 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:55 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1982 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:09:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:55.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:09:56 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:09:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:09:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:56.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:09:56 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:09:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:57.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:09:57 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:57 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:09:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:58.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:09:59 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:09:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:59.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:00.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:01 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:10:01 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1987 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:01 np0005592158 ceph-mon[81715]: Health detail: HEALTH_WARN 12 slow ops, oldest one blocked for 1987 sec, osd.2 has slow ops
Jan 22 09:10:01 np0005592158 ceph-mon[81715]: [WRN] SLOW_OPS: 12 slow ops, oldest one blocked for 1987 sec, osd.2 has slow ops
Jan 22 09:10:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:01.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:02.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:02 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:02 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:02 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:10:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:03.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:10:04 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:04 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:04.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:05 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:05 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1992 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:05.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:06 np0005592158 podman[227699]: 2026-01-22 14:10:06.112698623 +0000 UTC m=+0.106596778 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 09:10:06 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:10:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:06.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:06 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:07 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:07.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:08.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:08 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:09 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:09.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:10.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:10 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:10 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 1997 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:11 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:10:11 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:11.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:10:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:12.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:10:12 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:12 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:12 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:10:12.850 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:10:12 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:10:12.852 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:10:13 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:10:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:13.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:10:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:14.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:14 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:15 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2002 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:15 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:10:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:15.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:10:16 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:10:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:16.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:16 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:17.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:18.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:18 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:18 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:10:18.854 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:10:19 np0005592158 podman[227726]: 2026-01-22 14:10:19.074230186 +0000 UTC m=+0.059865863 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:10:19 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:10:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:19.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:10:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:20.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:20 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:20 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2007 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:21 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:10:21 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:10:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:21.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:10:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:22.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:23.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:24 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:24 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:24.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:25 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:25 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2013 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:10:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:25.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:10:26 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:10:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:26.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:26 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:27 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:27 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:27.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:28.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:28 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:10:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:29.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:10:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:30.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:30 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:31 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:10:31 np0005592158 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 09:10:31 np0005592158 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 09:10:31 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:31 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2018 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:10:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:31.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:10:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:32.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:32 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:33 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:10:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:33.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:10:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:34.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:34 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:35 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:35 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2023 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:35.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:36 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:10:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:36.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:36 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:37 np0005592158 podman[227746]: 2026-01-22 14:10:37.090462377 +0000 UTC m=+0.080525927 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 09:10:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:37.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:38 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:38.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:39 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:39 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:39.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:40 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:10:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:40.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:10:41 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:10:41 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:41 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2028 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:41.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:42.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:42 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:43 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:43.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:10:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:44.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:10:44 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:45 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:45 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2033 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:45.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:46 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:10:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:46.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:46 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:10:47.445 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:10:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:10:47.446 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:10:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:10:47.446 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:10:47 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:47 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:47.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:48.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:48 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:49 np0005592158 podman[227844]: 2026-01-22 14:10:49.178612157 +0000 UTC m=+0.056291991 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 09:10:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:49.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:50 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:50 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 09:10:50 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:10:50 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:10:50 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:10:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:50.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:51 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:51 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2038 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:51 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:10:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:10:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:51.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:10:52 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:52.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:53 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:53.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:54 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:54.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:55 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:55 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 22 09:10:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:56.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:56 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:10:56 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2043 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:56 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:10:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:56.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:57 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:58.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:10:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:10:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:58.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:10:58 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:58 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:10:58 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:10:59 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:00.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:11:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:00.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:11:00 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:00 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 2048 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:01 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:11:01 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:02.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:11:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:02.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:11:02 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:03 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:04.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:04.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:04 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:05 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:05 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:05 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2053 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:11:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:06.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:11:06 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:11:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:06.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:06 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:07 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:08.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:08 np0005592158 podman[227971]: 2026-01-22 14:11:08.09963682 +0000 UTC m=+0.085941281 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 22 09:11:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:08.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:08 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:09 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:11:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:10.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:11:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:10.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:10 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:10 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2058 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:11 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:11:11 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:12.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:12.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:12 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:13 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:14.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:14.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:14 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:15 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:11:15.586 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:11:15 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:11:15.587 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:11:15 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:15 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:16.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:16 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:11:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:16.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:16 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:18.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:18 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:18.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:18 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:11:18.589 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:11:19 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:20.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:20 np0005592158 podman[227997]: 2026-01-22 14:11:20.063597646 +0000 UTC m=+0.053160798 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 22 09:11:20 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:11:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:20.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:11:21 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:11:21 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:21 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2068 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:11:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:22.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:11:22 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:22.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:23 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:24.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:11:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:24.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:11:24 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:24 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:25 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:25 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2073 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:11:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:26.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:11:26 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:11:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:26.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:26 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:27 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:28.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:11:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:28.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:11:28 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:29 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:11:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:30.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:11:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:30.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:30 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:30 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2078 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:30 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:31 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:11:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:32.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:32.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:33 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:33 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:34.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:11:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:34.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:11:35 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:36.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:36 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:36 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2083 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:36 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:11:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:36.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:37 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:38.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:38 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:11:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:38.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:11:39 np0005592158 podman[228016]: 2026-01-22 14:11:39.08962493 +0000 UTC m=+0.079806569 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 22 09:11:39 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:11:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:40.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:11:40 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:40.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:41 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:11:41 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:41 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2088 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:11:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:42.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:11:42 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:42.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:43 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:11:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:44.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:11:44 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:44.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:45 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:45 np0005592158 ceph-mon[81715]: Health check update: 7 slow ops, oldest one blocked for 2093 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:45 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #67. Immutable memtables: 0.
Jan 22 09:11:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:11:45.477204) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:11:45 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 67
Jan 22 09:11:45 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091105477256, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 2467, "num_deletes": 251, "total_data_size": 4800566, "memory_usage": 4878776, "flush_reason": "Manual Compaction"}
Jan 22 09:11:45 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #68: started
Jan 22 09:11:45 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091105496457, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 68, "file_size": 3142489, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34754, "largest_seqno": 37216, "table_properties": {"data_size": 3133257, "index_size": 5342, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2821, "raw_key_size": 23921, "raw_average_key_size": 21, "raw_value_size": 3112819, "raw_average_value_size": 2796, "num_data_blocks": 230, "num_entries": 1113, "num_filter_entries": 1113, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769090934, "oldest_key_time": 1769090934, "file_creation_time": 1769091105, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:11:45 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 19289 microseconds, and 9213 cpu microseconds.
Jan 22 09:11:45 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:11:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:11:45.496501) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #68: 3142489 bytes OK
Jan 22 09:11:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:11:45.496521) [db/memtable_list.cc:519] [default] Level-0 commit table #68 started
Jan 22 09:11:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:11:45.498866) [db/memtable_list.cc:722] [default] Level-0 commit table #68: memtable #1 done
Jan 22 09:11:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:11:45.498884) EVENT_LOG_v1 {"time_micros": 1769091105498878, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:11:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:11:45.498903) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:11:45 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 4789274, prev total WAL file size 4789274, number of live WAL files 2.
Jan 22 09:11:45 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000064.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:11:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:11:45.500470) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Jan 22 09:11:45 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:11:45 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [68(3068KB)], [66(7663KB)]
Jan 22 09:11:45 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091105500571, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [68], "files_L6": [66], "score": -1, "input_data_size": 10990004, "oldest_snapshot_seqno": -1}
Jan 22 09:11:45 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #69: 7644 keys, 9277662 bytes, temperature: kUnknown
Jan 22 09:11:45 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091105560407, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 69, "file_size": 9277662, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9232017, "index_size": 25437, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19141, "raw_key_size": 203092, "raw_average_key_size": 26, "raw_value_size": 9097871, "raw_average_value_size": 1190, "num_data_blocks": 983, "num_entries": 7644, "num_filter_entries": 7644, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769091105, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 69, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:11:45 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:11:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:11:45.560970) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 9277662 bytes
Jan 22 09:11:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:11:45.562101) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 183.4 rd, 154.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 7.5 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(6.4) write-amplify(3.0) OK, records in: 8159, records dropped: 515 output_compression: NoCompression
Jan 22 09:11:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:11:45.562118) EVENT_LOG_v1 {"time_micros": 1769091105562109, "job": 40, "event": "compaction_finished", "compaction_time_micros": 59927, "compaction_time_cpu_micros": 25016, "output_level": 6, "num_output_files": 1, "total_output_size": 9277662, "num_input_records": 8159, "num_output_records": 7644, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:11:45 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:11:45 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091105562875, "job": 40, "event": "table_file_deletion", "file_number": 68}
Jan 22 09:11:45 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000066.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:11:45 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091105564456, "job": 40, "event": "table_file_deletion", "file_number": 66}
Jan 22 09:11:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:11:45.500349) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:11:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:11:45.564707) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:11:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:11:45.564715) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:11:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:11:45.564720) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:11:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:11:45.564722) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:11:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:11:45.564724) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:11:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:46.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:46 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:11:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:46.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:46 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:11:47.447 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:11:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:11:47.447 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:11:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:11:47.448 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:11:47 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:48.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:48.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:48 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:49 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:11:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:50.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:11:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:50.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:50 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:50 np0005592158 ceph-mon[81715]: Health check update: 7 slow ops, oldest one blocked for 2098 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:51 np0005592158 podman[228042]: 2026-01-22 14:11:51.063649151 +0000 UTC m=+0.054771091 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 09:11:51 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:11:51 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:51 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:52.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:52.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:52 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:53 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:54.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:54.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:54 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:55 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:55 np0005592158 ceph-mon[81715]: Health check update: 7 slow ops, oldest one blocked for 2103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:56.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:56 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:11:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:56.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:56 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:57 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:11:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:58.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:11:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:11:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:58.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:58 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:59 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:59 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:11:59 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:11:59 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:12:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:00.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:00.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:00 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:12:00 np0005592158 ceph-mon[81715]: Health check update: 7 slow ops, oldest one blocked for 2108 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:01 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:12:02 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:12:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:12:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:02.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:12:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:02.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:03 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:12:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:04.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:04 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:12:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:04.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:05 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:12:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:06.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:06 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:12:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:06.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:06 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:12:06 np0005592158 ceph-mon[81715]: Health check update: 7 slow ops, oldest one blocked for 2113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:07 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:12:07 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:12:07 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:12:07 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:12:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:08.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:08.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:08 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:09 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:12:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:10.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:12:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:10.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:10 np0005592158 podman[228245]: 2026-01-22 14:12:10.516542498 +0000 UTC m=+0.083403364 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 22 09:12:10 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:10 np0005592158 ceph-mon[81715]: Health check update: 7 slow ops, oldest one blocked for 2118 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:11 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:12:11 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:12.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:12.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:12 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:13 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:14.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:14.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:14 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:15 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:15 np0005592158 ceph-mon[81715]: Health check update: 14 slow ops, oldest one blocked for 2123 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:16.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:16 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:12:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:16.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:17 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:12:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:18.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:12:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:12:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3872109376' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:12:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:12:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3872109376' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:12:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:12:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:18.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:12:18 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:19 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:19 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:20.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:20.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:21 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:12:21 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:21 np0005592158 ceph-mon[81715]: Health check update: 14 slow ops, oldest one blocked for 2128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:22 np0005592158 podman[228272]: 2026-01-22 14:12:22.061943188 +0000 UTC m=+0.054620416 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 22 09:12:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:12:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:22.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:12:22 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:22.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:23 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:12:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:24.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:12:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:12:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:24.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:12:24 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:24 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:25 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:25 np0005592158 ceph-mon[81715]: Health check update: 14 slow ops, oldest one blocked for 2133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:12:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:26.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:12:26 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:12:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:26.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:26 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:28 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:28.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:28.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:29 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:30.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:30.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:31 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:31 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:31 np0005592158 ceph-mon[81715]: Health check update: 14 slow ops, oldest one blocked for 2138 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:31 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:12:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:32.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:32 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:32.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:33 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:34.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:34.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:35 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:35 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:36.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:36 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:12:36 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:36 np0005592158 ceph-mon[81715]: Health check update: 14 slow ops, oldest one blocked for 2143 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:12:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:36.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:12:37 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:38.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:12:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:38.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:12:38 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:38 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:38 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:39 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:40.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:40.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:40 np0005592158 ceph-mon[81715]: Health check update: 17 slow ops, oldest one blocked for 2148 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:40 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:41 np0005592158 podman[228292]: 2026-01-22 14:12:41.098490911 +0000 UTC m=+0.085338106 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 09:12:41 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:12:42 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:12:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:42.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:12:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:42.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:43 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:44 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:44.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:44.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:45 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:46 np0005592158 ceph-mon[81715]: Health check update: 17 slow ops, oldest one blocked for 2153 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:46 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:46 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:12:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:12:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:46.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:12:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:12:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:46.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:12:47 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:12:47.448 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:12:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:12:47.448 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:12:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:12:47.449 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:12:48 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:12:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:48.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:12:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:48.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:49 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:50 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:12:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:50.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:12:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:50.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:51 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:51 np0005592158 ceph-mon[81715]: Health check update: 17 slow ops, oldest one blocked for 2157 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:51 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:12:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:52.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:52 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:52.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:53 np0005592158 podman[228320]: 2026-01-22 14:12:53.076589142 +0000 UTC m=+0.060967826 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 22 09:12:53 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:12:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:54.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:12:54 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:12:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:54.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:12:55 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:56 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:12:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:56.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:56 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:56 np0005592158 ceph-mon[81715]: Health check update: 17 slow ops, oldest one blocked for 2162 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:56.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:57 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:57 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:12:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:58.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:12:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:12:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:58.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:58 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:12:59 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:00.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:13:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:00.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:13:00 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:00 np0005592158 ceph-mon[81715]: Health check update: 17 slow ops, oldest one blocked for 2167 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:01 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:13:02 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:13:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:02.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:13:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:02.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:03 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:04.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:04 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:13:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:04.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:13:05 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:06 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:13:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:06.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:06 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:06 np0005592158 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2172 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:06.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:07 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:08.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:08.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:08 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:08 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:10.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:13:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:10.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:13:10 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:13:10 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:10 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:13:10 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:13:11 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:13:11 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:11 np0005592158 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2177 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:11 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:13:11 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:13:11 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:12 np0005592158 podman[228474]: 2026-01-22 14:13:12.097612579 +0000 UTC m=+0.088102269 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251202)
Jan 22 09:13:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:12.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:12.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:13 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:14 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:14.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:14.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:15 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:15 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #70. Immutable memtables: 0.
Jan 22 09:13:15 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:13:15.722166) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:13:15 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 70
Jan 22 09:13:15 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091195722269, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 1411, "num_deletes": 256, "total_data_size": 2584745, "memory_usage": 2613200, "flush_reason": "Manual Compaction"}
Jan 22 09:13:15 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #71: started
Jan 22 09:13:15 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091195735547, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 71, "file_size": 1687738, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37221, "largest_seqno": 38627, "table_properties": {"data_size": 1682028, "index_size": 2850, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14642, "raw_average_key_size": 20, "raw_value_size": 1669526, "raw_average_value_size": 2351, "num_data_blocks": 124, "num_entries": 710, "num_filter_entries": 710, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091106, "oldest_key_time": 1769091106, "file_creation_time": 1769091195, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:13:15 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 13415 microseconds, and 7489 cpu microseconds.
Jan 22 09:13:15 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:13:15 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:13:15.735610) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #71: 1687738 bytes OK
Jan 22 09:13:15 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:13:15.735636) [db/memtable_list.cc:519] [default] Level-0 commit table #71 started
Jan 22 09:13:15 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:13:15.737447) [db/memtable_list.cc:722] [default] Level-0 commit table #71: memtable #1 done
Jan 22 09:13:15 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:13:15.737469) EVENT_LOG_v1 {"time_micros": 1769091195737461, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:13:15 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:13:15.737491) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:13:15 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 2577840, prev total WAL file size 2577840, number of live WAL files 2.
Jan 22 09:13:15 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000067.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:13:15 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:13:15.738696) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323537' seq:72057594037927935, type:22 .. '6C6F676D0031353039' seq:0, type:0; will stop at (end)
Jan 22 09:13:15 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:13:15 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [71(1648KB)], [69(9060KB)]
Jan 22 09:13:15 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091195738762, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [71], "files_L6": [69], "score": -1, "input_data_size": 10965400, "oldest_snapshot_seqno": -1}
Jan 22 09:13:15 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #72: 7829 keys, 10801710 bytes, temperature: kUnknown
Jan 22 09:13:15 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091195803424, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 72, "file_size": 10801710, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10753504, "index_size": 27550, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19589, "raw_key_size": 208555, "raw_average_key_size": 26, "raw_value_size": 10614726, "raw_average_value_size": 1355, "num_data_blocks": 1068, "num_entries": 7829, "num_filter_entries": 7829, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769091195, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 72, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:13:15 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:13:15 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:13:15.803789) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 10801710 bytes
Jan 22 09:13:15 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:13:15.805025) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.3 rd, 166.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 8.8 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(12.9) write-amplify(6.4) OK, records in: 8354, records dropped: 525 output_compression: NoCompression
Jan 22 09:13:15 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:13:15.805047) EVENT_LOG_v1 {"time_micros": 1769091195805036, "job": 42, "event": "compaction_finished", "compaction_time_micros": 64763, "compaction_time_cpu_micros": 25199, "output_level": 6, "num_output_files": 1, "total_output_size": 10801710, "num_input_records": 8354, "num_output_records": 7829, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:13:15 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:13:15 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091195805794, "job": 42, "event": "table_file_deletion", "file_number": 71}
Jan 22 09:13:15 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000069.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:13:15 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091195807764, "job": 42, "event": "table_file_deletion", "file_number": 69}
Jan 22 09:13:15 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:13:15.738594) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:13:15 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:13:15.807876) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:13:15 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:13:15.807882) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:13:15 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:13:15.807883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:13:15 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:13:15.807885) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:13:15 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:13:15.807888) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:13:16 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:13:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:16.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:16 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:16 np0005592158 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2182 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:13:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:16.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:13:17 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:13:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:18.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:13:18 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:18 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:13:18 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:13:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:18.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:19 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:20.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:20 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:20.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:21 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:13:21 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:21 np0005592158 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2187 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:22.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:22 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:22.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:23 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:24 np0005592158 podman[228550]: 2026-01-22 14:13:24.057436003 +0000 UTC m=+0.048307729 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:13:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:13:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:24.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:13:24 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:24.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:25 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:25 np0005592158 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2192 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:26 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:13:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:26.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:13:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:26.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:13:26 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:26 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:27 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:13:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:28.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:13:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:13:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:28.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:13:28 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:29 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:13:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:30.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:13:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:30.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:30 np0005592158 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2197 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:31 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:13:31 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:31 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:32.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:32.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:32 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:33 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:34.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:34.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:34 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:35 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:35 np0005592158 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2202 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:36 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:13:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:36.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:36.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:36 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:37 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:38.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:38.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:38 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:39 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:13:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:40.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:13:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:40.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:40 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:40 np0005592158 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2207 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:41 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:13:41 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:42.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:42.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:42 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:43 np0005592158 podman[228569]: 2026-01-22 14:13:43.101879686 +0000 UTC m=+0.094689295 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:13:44 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:44.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:44.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:45 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:46 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:46 np0005592158 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2212 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:46 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:13:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:46.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:46.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:47 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:13:47.448 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:13:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:13:47.449 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:13:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:13:47.449 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:13:48 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:48.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:48.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:49 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:50.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:50 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:50.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:51 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:13:51 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:51 np0005592158 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2217 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:52.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:52 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:52.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:53 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:54.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:54 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:54.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:55 np0005592158 podman[228597]: 2026-01-22 14:13:55.064475642 +0000 UTC m=+0.054821391 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 09:13:55 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:55 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:55 np0005592158 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2222 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:56 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:13:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:56.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:56 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:56.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:57 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:13:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:58.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:13:58 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:13:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:58.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:59 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:00.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:00 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:00 np0005592158 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2227 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:00.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:01 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:14:01 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:02.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:02.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:02 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:03 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:04.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:04.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:05 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:06 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:06 np0005592158 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2232 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:06 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:14:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:06.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:06.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:07 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:08 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:08.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:08.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:09 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:10 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:14:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:10.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:14:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:14:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:10.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:14:11 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:14:11 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:11 np0005592158 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2237 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:12.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:12.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:12 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:12 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:13 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:14 np0005592158 podman[228616]: 2026-01-22 14:14:14.104573889 +0000 UTC m=+0.086710332 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible)
Jan 22 09:14:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:14.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:14.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:14 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:15 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:15 np0005592158 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2242 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:14:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:16.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:14:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:16.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:16 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:14:17 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:18 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:18.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:18 np0005592158 podman[228814]: 2026-01-22 14:14:18.61973168 +0000 UTC m=+0.067470249 container exec 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:14:18 np0005592158 podman[228814]: 2026-01-22 14:14:18.754120092 +0000 UTC m=+0.201858661 container exec_died 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:14:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:14:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:18.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:14:19 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:20 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:14:20 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:14:20 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:20 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:14:20 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:14:20 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:14:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:20.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:20.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:21 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:21 np0005592158 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2247 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:21 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:14:22 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:22.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:22.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:23 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:24 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:24.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:24.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:25 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:26 np0005592158 podman[229070]: 2026-01-22 14:14:26.070475805 +0000 UTC m=+0.055719156 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 09:14:26 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:26 np0005592158 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2252 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:26 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:14:26 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:14:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:26.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:26.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:26 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:14:27 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:28.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:28.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:29 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:30 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:30 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:30.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:30.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:31 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:31 np0005592158 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2257 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:31 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:14:32 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:32.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:32.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:33 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:33 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #73. Immutable memtables: 0.
Jan 22 09:14:33 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:14:33.288876) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:14:33 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 73
Jan 22 09:14:33 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091273288957, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1306, "num_deletes": 251, "total_data_size": 2307919, "memory_usage": 2338280, "flush_reason": "Manual Compaction"}
Jan 22 09:14:33 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #74: started
Jan 22 09:14:33 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091273300424, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 74, "file_size": 1505369, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38632, "largest_seqno": 39933, "table_properties": {"data_size": 1500082, "index_size": 2555, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13785, "raw_average_key_size": 20, "raw_value_size": 1488459, "raw_average_value_size": 2251, "num_data_blocks": 110, "num_entries": 661, "num_filter_entries": 661, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091196, "oldest_key_time": 1769091196, "file_creation_time": 1769091273, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:14:33 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 11593 microseconds, and 5073 cpu microseconds.
Jan 22 09:14:33 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:14:33 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:14:33.300480) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #74: 1505369 bytes OK
Jan 22 09:14:33 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:14:33.300504) [db/memtable_list.cc:519] [default] Level-0 commit table #74 started
Jan 22 09:14:33 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:14:33.302178) [db/memtable_list.cc:722] [default] Level-0 commit table #74: memtable #1 done
Jan 22 09:14:33 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:14:33.302192) EVENT_LOG_v1 {"time_micros": 1769091273302188, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:14:33 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:14:33.302210) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:14:33 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 2301526, prev total WAL file size 2301526, number of live WAL files 2.
Jan 22 09:14:33 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000070.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:14:33 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:14:33.302952) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Jan 22 09:14:33 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:14:33 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [74(1470KB)], [72(10MB)]
Jan 22 09:14:33 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091273302986, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [74], "files_L6": [72], "score": -1, "input_data_size": 12307079, "oldest_snapshot_seqno": -1}
Jan 22 09:14:33 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #75: 7973 keys, 10596101 bytes, temperature: kUnknown
Jan 22 09:14:33 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091273377944, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 75, "file_size": 10596101, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10547245, "index_size": 27816, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19973, "raw_key_size": 212770, "raw_average_key_size": 26, "raw_value_size": 10405996, "raw_average_value_size": 1305, "num_data_blocks": 1075, "num_entries": 7973, "num_filter_entries": 7973, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769091273, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 75, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:14:33 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:14:33 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:14:33.378243) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 10596101 bytes
Jan 22 09:14:33 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:14:33.380296) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 163.9 rd, 141.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 10.3 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(15.2) write-amplify(7.0) OK, records in: 8490, records dropped: 517 output_compression: NoCompression
Jan 22 09:14:33 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:14:33.380322) EVENT_LOG_v1 {"time_micros": 1769091273380310, "job": 44, "event": "compaction_finished", "compaction_time_micros": 75070, "compaction_time_cpu_micros": 26789, "output_level": 6, "num_output_files": 1, "total_output_size": 10596101, "num_input_records": 8490, "num_output_records": 7973, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:14:33 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:14:33 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091273380856, "job": 44, "event": "table_file_deletion", "file_number": 74}
Jan 22 09:14:33 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000072.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:14:33 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091273382912, "job": 44, "event": "table_file_deletion", "file_number": 72}
Jan 22 09:14:33 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:14:33.302884) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:14:33 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:14:33.382999) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:14:33 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:14:33.383004) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:14:33 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:14:33.383005) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:14:33 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:14:33.383007) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:14:33 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:14:33.383008) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:14:34 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:34.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:34 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 09:14:34 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.5 total, 600.0 interval#012Cumulative writes: 8392 writes, 31K keys, 8392 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 8392 writes, 2025 syncs, 4.14 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1095 writes, 3258 keys, 1095 commit groups, 1.0 writes per commit group, ingest: 2.59 MB, 0.00 MB/s#012Interval WAL: 1095 writes, 476 syncs, 2.30 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 09:14:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:34.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:35 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:36 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:36 np0005592158 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2262 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:36.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:36 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:14:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:14:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:36.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:14:37 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:38 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:38.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:38.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:39 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:40 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:14:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:40.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:14:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:40.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:41 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:41 np0005592158 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2267 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:41 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:14:42 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:42.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:14:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:42.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:14:43 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:44 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:14:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:44.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:14:44 np0005592158 podman[229140]: 2026-01-22 14:14:44.634396014 +0000 UTC m=+0.087051331 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 22 09:14:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:44.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:45 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:46 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:46 np0005592158 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2272 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:46.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:46 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:14:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:46.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:14:47.449 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:14:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:14:47.450 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:14:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:14:47.450 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:14:47 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:48.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:48 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:48 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:48.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:49 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:50.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:50 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:50 np0005592158 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2277 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:50.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:51 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:51 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:14:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:14:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:52.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:14:52 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:52.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:53 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:54.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:54.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:55 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:56 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:56 np0005592158 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2282 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:56.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:56 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:14:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:56.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:57 np0005592158 podman[229169]: 2026-01-22 14:14:57.057323241 +0000 UTC m=+0.046893001 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 22 09:14:57 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:14:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:58.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:14:58 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:58 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:14:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:58.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:59 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:00.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:00 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:00 np0005592158 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2287 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:00.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:01 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:15:01 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:02.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:02 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:02.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:03 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:04.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:04.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:04 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:06 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:06 np0005592158 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2292 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:06.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:06 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:15:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:06.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:07 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:08 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:08.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:15:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:08.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:15:09 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:10 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:15:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:10.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:15:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:10.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:11 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:11 np0005592158 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2297 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:11 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:15:12 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:12.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:12.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:13 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:14 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:14.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:14.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:15 np0005592158 podman[229189]: 2026-01-22 14:15:15.129835572 +0000 UTC m=+0.114483243 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 22 09:15:15 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:16 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:16 np0005592158 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2302 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:16.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:16 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:15:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:16.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:17 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:15:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1676732257' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:15:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:15:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1676732257' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:15:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:15:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:18.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:15:18 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:18.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:19 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:19 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:20.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:20 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:20 np0005592158 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2307 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:20.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:21 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:15:22 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:15:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:22.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:15:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:15:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:22.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:15:23 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:24 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:24.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:24.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:25 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:26.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:26 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:15:26 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:26 np0005592158 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2312 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:26 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:26.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:28 np0005592158 podman[229347]: 2026-01-22 14:15:28.055287303 +0000 UTC m=+0.049849991 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 09:15:28 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:15:28 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:15:28 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:28 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:15:28 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:15:28 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:15:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:15:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:28.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:15:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:28.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:29 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:30.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:30.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:31 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:31 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 09:15:31 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 7321 writes, 40K keys, 7321 commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s#012Cumulative WAL: 7321 writes, 7321 syncs, 1.00 writes per sync, written: 0.07 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1836 writes, 9586 keys, 1836 commit groups, 1.0 writes per commit group, ingest: 16.50 MB, 0.03 MB/s#012Interval WAL: 1836 writes, 1836 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     49.8      0.90              0.14        22    0.041       0      0       0.0       0.0#012  L6      1/0   10.11 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.1    130.6    109.8      1.66              0.51        21    0.079    135K    12K       0.0       0.0#012 Sum      1/0   10.11 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   5.1     84.8     88.7      2.55              0.65        43    0.059    135K    12K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   5.8    135.1    138.4      0.48              0.20        12    0.040     48K   4092       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    130.6    109.8      1.66              0.51        21    0.079    135K    12K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     49.9      0.89              0.14        21    0.043       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.044, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.22 GB write, 0.09 MB/s write, 0.21 GB read, 0.09 MB/s read, 2.6 seconds#012Interval compaction: 0.07 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f7686a91f0#2 capacity: 304.00 MB usage: 23.60 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000151 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(1252,22.65 MB,7.45014%) FilterBlock(43,388.92 KB,0.124936%) IndexBlock(43,580.58 KB,0.186504%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 22 09:15:31 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:15:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:15:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:32.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:15:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:32.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:32 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:32 np0005592158 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2317 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:32 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:32 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:34 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:15:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:34.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:15:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:15:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:34.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:15:35 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:36 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:36.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:36 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:15:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:36.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:37 np0005592158 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2322 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:37 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:38 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:15:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:38.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:15:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:38.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:39 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:39 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:15:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:40.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:15:40 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:40.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:41 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:15:42 np0005592158 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2327 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:42 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:42 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:15:42 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:15:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:42.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:15:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:42.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:15:43 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:15:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:44.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:15:44 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:44 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:15:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:44.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:15:45 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:46 np0005592158 podman[229418]: 2026-01-22 14:15:46.103060363 +0000 UTC m=+0.087070305 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, 
container_name=ovn_controller, managed_by=edpm_ansible)
Jan 22 09:15:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:46.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:46 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:15:46 np0005592158 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2337 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:46 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:46.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:15:47.450 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:15:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:15:47.452 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:15:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:15:47.452 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:15:47 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:48.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:48 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:48.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:50 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:50.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:51.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:51 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:51 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:15:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:52.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:52 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:52 np0005592158 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2342 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:52 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:53.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:54 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:54.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:55.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:55 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:15:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:56.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:15:56 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:56 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:15:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:15:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:57.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:15:57 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:57 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:58.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:15:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:59.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:59 np0005592158 podman[229444]: 2026-01-22 14:15:59.070643051 +0000 UTC m=+0.056369038 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Jan 22 09:15:59 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:16:00 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:16:00.411 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:16:00 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:16:00.412 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:16:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:16:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:00.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:16:00 np0005592158 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:01.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:01 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:16:01 np0005592158 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:01 np0005592158 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 2347 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:02.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:03.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:03 np0005592158 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:03 np0005592158 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:04 np0005592158 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:04.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:05.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:05 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:16:05.415 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 09:16:05 np0005592158 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:16:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:06.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:16:06 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:16:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:07.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:07 np0005592158 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:07 np0005592158 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:07 np0005592158 ceph-mon[81715]: Health check update: 18 slow ops, oldest one blocked for 2352 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:08 np0005592158 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:16:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:08.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:16:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:09.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:10 np0005592158 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:10.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:16:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:11.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:16:11 np0005592158 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:11 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #76. Immutable memtables: 0.
Jan 22 09:16:11 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:16:11.793480) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:16:11 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 76
Jan 22 09:16:11 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091371793928, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1482, "num_deletes": 251, "total_data_size": 2779374, "memory_usage": 2815816, "flush_reason": "Manual Compaction"}
Jan 22 09:16:11 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #77: started
Jan 22 09:16:11 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091371803517, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 77, "file_size": 1150158, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39938, "largest_seqno": 41415, "table_properties": {"data_size": 1145332, "index_size": 2030, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14943, "raw_average_key_size": 21, "raw_value_size": 1133865, "raw_average_value_size": 1652, "num_data_blocks": 88, "num_entries": 686, "num_filter_entries": 686, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091274, "oldest_key_time": 1769091274, "file_creation_time": 1769091371, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:16:11 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 10092 microseconds, and 5026 cpu microseconds.
Jan 22 09:16:11 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:16:11 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:16:11.803578) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #77: 1150158 bytes OK
Jan 22 09:16:11 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:16:11.803604) [db/memtable_list.cc:519] [default] Level-0 commit table #77 started
Jan 22 09:16:11 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:16:11.805020) [db/memtable_list.cc:722] [default] Level-0 commit table #77: memtable #1 done
Jan 22 09:16:11 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:16:11.805034) EVENT_LOG_v1 {"time_micros": 1769091371805030, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:16:11 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:16:11.805054) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:16:11 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 2772229, prev total WAL file size 2772229, number of live WAL files 2.
Jan 22 09:16:11 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000073.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:16:11 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:16:11.805940) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303033' seq:72057594037927935, type:22 .. '6D6772737461740031323535' seq:0, type:0; will stop at (end)
Jan 22 09:16:11 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:16:11 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [77(1123KB)], [75(10MB)]
Jan 22 09:16:11 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091371805987, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [77], "files_L6": [75], "score": -1, "input_data_size": 11746259, "oldest_snapshot_seqno": -1}
Jan 22 09:16:11 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #78: 8187 keys, 8509732 bytes, temperature: kUnknown
Jan 22 09:16:11 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091371858165, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 78, "file_size": 8509732, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8463294, "index_size": 24886, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20485, "raw_key_size": 218105, "raw_average_key_size": 26, "raw_value_size": 8322082, "raw_average_value_size": 1016, "num_data_blocks": 953, "num_entries": 8187, "num_filter_entries": 8187, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769091371, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 78, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:16:11 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:16:11 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:16:11 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:16:11.858869) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 8509732 bytes
Jan 22 09:16:11 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:16:11.860344) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 224.7 rd, 162.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 10.1 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(17.6) write-amplify(7.4) OK, records in: 8659, records dropped: 472 output_compression: NoCompression
Jan 22 09:16:11 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:16:11.860360) EVENT_LOG_v1 {"time_micros": 1769091371860352, "job": 46, "event": "compaction_finished", "compaction_time_micros": 52264, "compaction_time_cpu_micros": 24719, "output_level": 6, "num_output_files": 1, "total_output_size": 8509732, "num_input_records": 8659, "num_output_records": 8187, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:16:11 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:16:11 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091371860782, "job": 46, "event": "table_file_deletion", "file_number": 77}
Jan 22 09:16:11 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000075.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:16:11 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091371862942, "job": 46, "event": "table_file_deletion", "file_number": 75}
Jan 22 09:16:11 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:16:11.805861) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:16:11 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:16:11.862996) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:16:11 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:16:11.863003) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:16:11 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:16:11.863005) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:16:11 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:16:11.863007) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:16:11 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:16:11.863009) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:16:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:12.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:16:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:13.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:16:13 np0005592158 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:13 np0005592158 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:13 np0005592158 ceph-mon[81715]: Health check update: 18 slow ops, oldest one blocked for 2357 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:14 np0005592158 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:14 np0005592158 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:16:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:14.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:16:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:15.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:16 np0005592158 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:16 np0005592158 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:16.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:16 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:16:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:17.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:17 np0005592158 podman[229463]: 2026-01-22 14:16:17.095662035 +0000 UTC m=+0.086527631 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Jan 22 09:16:17 np0005592158 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:17 np0005592158 ceph-mon[81715]: Health check update: 18 slow ops, oldest one blocked for 2362 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:18.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:18 np0005592158 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:18 np0005592158 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:19.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:20.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:16:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:21.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:16:21 np0005592158 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:21 np0005592158 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:21 np0005592158 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:21 np0005592158 ceph-mon[81715]: Health check update: 18 slow ops, oldest one blocked for 2367 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:21 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:16:22 np0005592158 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:22.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:16:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:23.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:16:24 np0005592158 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:24.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:25.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:25 np0005592158 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:26.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:26 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:16:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:27.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:27 np0005592158 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:28.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:28 np0005592158 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:28 np0005592158 ceph-mon[81715]: Health check update: 18 slow ops, oldest one blocked for 2372 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:28 np0005592158 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:16:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:29.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:16:30 np0005592158 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:30 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:30 np0005592158 podman[229490]: 2026-01-22 14:16:30.061569975 +0000 UTC m=+0.051402802 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 22 09:16:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:16:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:30.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:16:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:31.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:31 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:31 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:16:32 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:32 np0005592158 ceph-mon[81715]: Health check update: 18 slow ops, oldest one blocked for 2377 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:16:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:32.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:16:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:33.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:33 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:34 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:34.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:16:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:35.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:16:36 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:36 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:36.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:36 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:16:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:37.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:37 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:37 np0005592158 ceph-mon[81715]: Health check update: 20 slow ops, oldest one blocked for 2387 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:38 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:38.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:39.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:39 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:40.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:41.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:41 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:41 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:41 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:16:42 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:42.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:43.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:43 np0005592158 ceph-mon[81715]: Health check update: 20 slow ops, oldest one blocked for 2392 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:43 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:43 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:16:43 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:16:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:44.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:44 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:44 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:16:44 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:16:44 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:16:44 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:45.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:46 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:16:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:46.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:16:46 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:16:47 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:47.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:16:47.452 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:16:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:16:47.452 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:16:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:16:47.452 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:16:48 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:48 np0005592158 podman[229759]: 2026-01-22 14:16:48.102788647 +0000 UTC m=+0.087983260 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Jan 22 09:16:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:16:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:48.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:16:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:49.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:49 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:50 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:16:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:50.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:16:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:51.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:51 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:51 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:16:51 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:16:51 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:16:52 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:52 np0005592158 ceph-mon[81715]: Health check update: 20 slow ops, oldest one blocked for 2397 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:52.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:53.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:53 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:54 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:54.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:55.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:55 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:56 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:56.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:56 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:16:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:57.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:57 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:57 np0005592158 ceph-mon[81715]: Health check update: 20 slow ops, oldest one blocked for 2407 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:58.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:16:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:59.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:59 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:59 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:00.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:01 np0005592158 podman[229836]: 2026-01-22 14:17:01.061959076 +0000 UTC m=+0.054082395 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:17:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:01.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:01 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:17:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:02.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:02 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:17:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:03.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:17:04 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:04.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:17:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:05.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:17:05 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:05 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:05 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:05 np0005592158 ceph-mon[81715]: Health check update: 20 slow ops, oldest one blocked for 2412 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:17:05 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:06.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:06 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:17:06 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:17:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:07.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:08 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:08 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:08.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:17:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:09.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:17:09 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:17:09.328 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:17:09 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:17:09.329 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:17:09 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:10 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:17:10.331 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:17:10 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:17:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:10.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:17:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:17:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:11.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:17:11 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:11 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:11 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:17:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:17:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:12.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:17:12 np0005592158 ceph-mon[81715]: Health check update: 20 slow ops, oldest one blocked for 2422 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:17:12 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:17:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:13.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:17:13 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:14.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:14 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:17:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:15.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:17:16 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:16.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:16 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:17:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:17.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:17 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:17 np0005592158 ceph-mon[81715]: Health check update: 20 slow ops, oldest one blocked for 2427 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:17:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:17:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:18.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:17:18 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:18 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:19 np0005592158 podman[229855]: 2026-01-22 14:17:19.101009868 +0000 UTC m=+0.088026162 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 09:17:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:19.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:19 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:20.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:21 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:21.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:21 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:17:22 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:17:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:22.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:17:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:23.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:23 np0005592158 ceph-mon[81715]: Health check update: 20 slow ops, oldest one blocked for 2432 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:17:23 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:24.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:24 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:24 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:25.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:26.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:26 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:17:26 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:27.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:28 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:28 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:28.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:29 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:29.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:30 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:30.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:31.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:31 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:31 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:17:32 np0005592158 podman[229881]: 2026-01-22 14:17:32.08591029 +0000 UTC m=+0.069612938 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:17:32 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:32 np0005592158 ceph-mon[81715]: Health check update: 20 slow ops, oldest one blocked for 2437 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:17:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:32.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:33.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:33 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:34 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #79. Immutable memtables: 0.
Jan 22 09:17:34 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:17:34.284653) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:17:34 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 79
Jan 22 09:17:34 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091454284734, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 1242, "num_deletes": 251, "total_data_size": 2383867, "memory_usage": 2420128, "flush_reason": "Manual Compaction"}
Jan 22 09:17:34 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #80: started
Jan 22 09:17:34 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091454296943, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 80, "file_size": 1568060, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41420, "largest_seqno": 42657, "table_properties": {"data_size": 1562724, "index_size": 2604, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13558, "raw_average_key_size": 21, "raw_value_size": 1551283, "raw_average_value_size": 2405, "num_data_blocks": 111, "num_entries": 645, "num_filter_entries": 645, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091372, "oldest_key_time": 1769091372, "file_creation_time": 1769091454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:17:34 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 12298 microseconds, and 4985 cpu microseconds.
Jan 22 09:17:34 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:17:34 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:17:34.296990) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #80: 1568060 bytes OK
Jan 22 09:17:34 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:17:34.297016) [db/memtable_list.cc:519] [default] Level-0 commit table #80 started
Jan 22 09:17:34 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:17:34.298634) [db/memtable_list.cc:722] [default] Level-0 commit table #80: memtable #1 done
Jan 22 09:17:34 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:17:34.298649) EVENT_LOG_v1 {"time_micros": 1769091454298644, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:17:34 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:17:34.298689) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:17:34 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 2377682, prev total WAL file size 2377682, number of live WAL files 2.
Jan 22 09:17:34 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000076.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:17:34 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:17:34.299718) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Jan 22 09:17:34 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:17:34 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [80(1531KB)], [78(8310KB)]
Jan 22 09:17:34 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091454299789, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [80], "files_L6": [78], "score": -1, "input_data_size": 10077792, "oldest_snapshot_seqno": -1}
Jan 22 09:17:34 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #81: 8315 keys, 8450538 bytes, temperature: kUnknown
Jan 22 09:17:34 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091454354051, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 81, "file_size": 8450538, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8403399, "index_size": 25267, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20805, "raw_key_size": 222093, "raw_average_key_size": 26, "raw_value_size": 8259917, "raw_average_value_size": 993, "num_data_blocks": 963, "num_entries": 8315, "num_filter_entries": 8315, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769091454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 81, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:17:34 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:17:34 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:17:34.354372) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 8450538 bytes
Jan 22 09:17:34 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:17:34.355641) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 185.3 rd, 155.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 8.1 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(11.8) write-amplify(5.4) OK, records in: 8832, records dropped: 517 output_compression: NoCompression
Jan 22 09:17:34 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:17:34.355952) EVENT_LOG_v1 {"time_micros": 1769091454355651, "job": 48, "event": "compaction_finished", "compaction_time_micros": 54373, "compaction_time_cpu_micros": 23417, "output_level": 6, "num_output_files": 1, "total_output_size": 8450538, "num_input_records": 8832, "num_output_records": 8315, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:17:34 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:17:34 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091454356426, "job": 48, "event": "table_file_deletion", "file_number": 80}
Jan 22 09:17:34 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000078.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:17:34 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091454358459, "job": 48, "event": "table_file_deletion", "file_number": 78}
Jan 22 09:17:34 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:17:34.299564) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:17:34 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:17:34.358504) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:17:34 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:17:34.358508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:17:34 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:17:34.358510) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:17:34 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:17:34.358511) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:17:34 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:17:34.358515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:17:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:17:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:34.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:17:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:35.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:35 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:35 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:36 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:36.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:36 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:17:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:37.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:37 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:37 np0005592158 ceph-mon[81715]: Health check update: 20 slow ops, oldest one blocked for 2442 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:17:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:38.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:39.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:39 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:39 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:40 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:40.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:41.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:41 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:17:42 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:42 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:42 np0005592158 ceph-mon[81715]: Health check update: 20 slow ops, oldest one blocked for 2447 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:17:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:42.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:17:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:43.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:17:43 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:44 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:17:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:44.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:17:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:45.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:45 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:46.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:46 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:17:47 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:47 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:47 np0005592158 ceph-mon[81715]: Health check update: 20 slow ops, oldest one blocked for 2457 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:17:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:47.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:17:47.453 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:17:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:17:47.454 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:17:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:17:47.454 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:17:48 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:17:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:48.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:17:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:49.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:49 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:50 np0005592158 podman[229901]: 2026-01-22 14:17:50.151817596 +0000 UTC m=+0.148639605 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 22 09:17:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:17:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:50.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:17:50 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:51.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:51 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:17:52 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:52 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:17:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:52.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:17:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:53.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:53 np0005592158 ceph-mon[81715]: Health check update: 20 slow ops, oldest one blocked for 2462 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:17:53 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:53 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 09:17:53 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 09:17:53 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:17:53 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:17:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:17:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:54.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:17:55 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:55 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:55.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:56 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:17:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:56.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:57 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:17:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:57.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:17:58 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:58 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:58.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:17:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:17:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:59.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:18:00 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:18:00 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:18:00 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:18:00 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:18:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:00.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:01.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:01 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:18:01 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:18:01 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:18:01 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:18:01 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:18:02 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:18:02 np0005592158 ceph-mon[81715]: Health check update: 20 slow ops, oldest one blocked for 2467 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:18:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:02.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:03 np0005592158 podman[230056]: 2026-01-22 14:18:03.084151457 +0000 UTC m=+0.066538746 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 09:18:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:18:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:03.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:18:03 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:18:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:04.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:05.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:05 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:06 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:18:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:06.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:07.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:07 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:07 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:08 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:08 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2477 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:18:08 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:08.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:09.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:09 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:10 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:10 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:10.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:11.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:11 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #82. Immutable memtables: 0.
Jan 22 09:18:11 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:18:11.863840) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:18:11 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 82
Jan 22 09:18:11 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091491863876, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 726, "num_deletes": 255, "total_data_size": 1207956, "memory_usage": 1229768, "flush_reason": "Manual Compaction"}
Jan 22 09:18:11 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #83: started
Jan 22 09:18:11 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091491870792, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 83, "file_size": 784551, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42662, "largest_seqno": 43383, "table_properties": {"data_size": 780920, "index_size": 1411, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9036, "raw_average_key_size": 19, "raw_value_size": 773296, "raw_average_value_size": 1703, "num_data_blocks": 61, "num_entries": 454, "num_filter_entries": 454, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091454, "oldest_key_time": 1769091454, "file_creation_time": 1769091491, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:18:11 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 7037 microseconds, and 3469 cpu microseconds.
Jan 22 09:18:11 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:18:11 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:18:11.870867) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #83: 784551 bytes OK
Jan 22 09:18:11 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:18:11.870899) [db/memtable_list.cc:519] [default] Level-0 commit table #83 started
Jan 22 09:18:11 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:18:11.872290) [db/memtable_list.cc:722] [default] Level-0 commit table #83: memtable #1 done
Jan 22 09:18:11 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:18:11.872312) EVENT_LOG_v1 {"time_micros": 1769091491872304, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:18:11 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:18:11.872336) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:18:11 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 1203891, prev total WAL file size 1203891, number of live WAL files 2.
Jan 22 09:18:11 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000079.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:18:11 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:18:11.873289) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353038' seq:72057594037927935, type:22 .. '6C6F676D0031373539' seq:0, type:0; will stop at (end)
Jan 22 09:18:11 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:18:11 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [83(766KB)], [81(8252KB)]
Jan 22 09:18:11 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091491873367, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [83], "files_L6": [81], "score": -1, "input_data_size": 9235089, "oldest_snapshot_seqno": -1}
Jan 22 09:18:11 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:18:11 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #84: 8244 keys, 9067902 bytes, temperature: kUnknown
Jan 22 09:18:11 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091491923977, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 84, "file_size": 9067902, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9020306, "index_size": 25852, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20677, "raw_key_size": 221876, "raw_average_key_size": 26, "raw_value_size": 8877151, "raw_average_value_size": 1076, "num_data_blocks": 985, "num_entries": 8244, "num_filter_entries": 8244, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769091491, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 84, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:18:11 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:18:11 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:18:11.924259) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 9067902 bytes
Jan 22 09:18:11 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:18:11.925736) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.2 rd, 178.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 8.1 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(23.3) write-amplify(11.6) OK, records in: 8769, records dropped: 525 output_compression: NoCompression
Jan 22 09:18:11 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:18:11.925756) EVENT_LOG_v1 {"time_micros": 1769091491925747, "job": 50, "event": "compaction_finished", "compaction_time_micros": 50688, "compaction_time_cpu_micros": 27733, "output_level": 6, "num_output_files": 1, "total_output_size": 9067902, "num_input_records": 8769, "num_output_records": 8244, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:18:11 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:18:11 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091491926124, "job": 50, "event": "table_file_deletion", "file_number": 83}
Jan 22 09:18:11 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000081.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:18:11 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091491927704, "job": 50, "event": "table_file_deletion", "file_number": 81}
Jan 22 09:18:11 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:18:11.873104) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:18:11 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:18:11.928108) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:18:11 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:18:11.928118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:18:11 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:18:11.928122) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:18:11 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:18:11.928127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:18:11 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:18:11.928130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:18:12 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:12 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:18:12 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:18:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:12.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:13.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:13 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:13 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2482 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:18:14 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:14 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:14.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:15.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:15 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:16 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:18:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:16.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:16 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:17.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:18 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:18.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:19 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:19.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:20 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:20.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:21 np0005592158 podman[230126]: 2026-01-22 14:18:21.101714916 +0000 UTC m=+0.095883895 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 22 09:18:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:21.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:21 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:21 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:18:22 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:22 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2487 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:18:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:22.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:23.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:23 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:23 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:24.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:24 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:25.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:25 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:26 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:18:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:18:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:26.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:18:27 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:27 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2497 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:18:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:27.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:28 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:28.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:18:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:29.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:18:30 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:30.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:31 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:31 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:18:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:31.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:18:31 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:18:32 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:32.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:33 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:33 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 2502 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:18:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:33.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:34 np0005592158 podman[230152]: 2026-01-22 14:18:34.056865525 +0000 UTC m=+0.047068605 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 09:18:34 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:34 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:34.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:18:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:35.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:18:35 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:36 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:18:36 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:36.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:37.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:37 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:38.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:39 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:39.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:40 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:40.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:41 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:41.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:41 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:18:42 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:42 np0005592158 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2507 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:18:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:42.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:43 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:43.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:44 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:44.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:45 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:45.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:46 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:46 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:18:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:46.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:47 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:47 np0005592158 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2517 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:18:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:18:47.455 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:18:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:18:47.455 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:18:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:18:47.455 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:18:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:47.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:48 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:48.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:49 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:49.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:50 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:50.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:51 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:51.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:51 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:18:52 np0005592158 podman[230172]: 2026-01-22 14:18:52.081931645 +0000 UTC m=+0.075560022 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Jan 22 09:18:52 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:52.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:53 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:53 np0005592158 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2522 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:18:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:53.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:54 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:18:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:54.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:18:55 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:55.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:56 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:56 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:18:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:56.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:57 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:18:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:57.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:18:58 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:58 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:58.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:59 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:18:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:59.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:00 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:00.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:01 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:01.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:01 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:19:02 np0005592158 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2532 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:19:02 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:02.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:03 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:03.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:04 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:04.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:05 np0005592158 podman[230198]: 2026-01-22 14:19:05.069068427 +0000 UTC m=+0.064321815 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 09:19:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.002000054s ======
Jan 22 09:19:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:05.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Jan 22 09:19:05 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:06 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:06 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:19:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:19:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:06.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:19:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:19:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:07.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:19:07 np0005592158 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2537 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:19:07 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:08 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:19:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:09.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:19:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:19:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:09.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:19:10 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:19:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:11.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:19:11 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:11.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:11 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:19:12 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:19:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:13.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:19:13 np0005592158 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2542 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:19:13 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:13 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:19:13 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:19:13 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:19:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:13.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:14 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:15.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:15 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:15.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:16 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:16 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:19:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:17.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:17 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:17.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:18 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:19:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:19.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:19:19 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:19 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:19:19 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:19:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:19.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:20 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:21.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:21.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:21 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:19:22 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:23.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:23 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:23 np0005592158 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2547 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:19:23 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:23 np0005592158 podman[230397]: 2026-01-22 14:19:23.094496562 +0000 UTC m=+0.085179844 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:19:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:23.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:24 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:25.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:25 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:25.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:26 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:26 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:19:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:27.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:27 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:27 np0005592158 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2557 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:19:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:27.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:28 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:29.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:29 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:19:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:29.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:19:30 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:31.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:31 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:31.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:31 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:19:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:33.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:33 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:33.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:34 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:34 np0005592158 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2562 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:19:34 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:19:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:35.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:19:35 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:19:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:35.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:19:36 np0005592158 podman[230423]: 2026-01-22 14:19:36.063503989 +0000 UTC m=+0.052628285 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 22 09:19:36 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:36 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:19:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:37.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:37 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:19:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:37.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:19:38 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:39.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:39 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:19:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:39.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:19:40 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:19:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:41.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:19:41 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:19:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:41.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:19:41 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:19:42 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:42 np0005592158 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2567 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:19:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:19:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:43.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:19:43 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:19:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:43.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:19:44 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:45.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:45 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:19:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:45.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:19:46 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:46 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:19:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:19:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:47.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:19:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:19:47.455 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:19:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:19:47.456 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:19:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:19:47.456 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:19:47 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:47 np0005592158 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2577 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:19:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:19:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:47.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:19:48 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:49.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:49 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:49 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:49.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:50 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:51.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:51 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:19:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:51.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:19:51 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:19:52 np0005592158 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2582 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:19:52 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:53.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:53 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:53.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:54 np0005592158 podman[230444]: 2026-01-22 14:19:54.131500363 +0000 UTC m=+0.103359740 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:19:54 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:55.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:55 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:55.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:56 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:56 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:19:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:57.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:19:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:57.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:19:58 np0005592158 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2587 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:19:58 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:59.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:59 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:19:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:59.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:00 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:00 np0005592158 ceph-mon[81715]: Health detail: HEALTH_WARN 22 slow ops, oldest one blocked for 2587 sec, osd.2 has slow ops
Jan 22 09:20:00 np0005592158 ceph-mon[81715]: [WRN] SLOW_OPS: 22 slow ops, oldest one blocked for 2587 sec, osd.2 has slow ops
Jan 22 09:20:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:01.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:01 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:01 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:01.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:01 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:20:02 np0005592158 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2592 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:20:02 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:03.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:03.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:04 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:05.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:05 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:05.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:06 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:06 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:20:07 np0005592158 podman[230473]: 2026-01-22 14:20:07.071388956 +0000 UTC m=+0.060813750 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:20:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:20:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:07.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:20:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:07.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:07 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:07 np0005592158 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2597 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:20:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:09.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:09 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:09 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:09.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:10 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:11.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:11.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:12 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:20:12 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:12 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:13.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:13 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:13.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:14 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:20:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:15.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:20:15 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:15.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:16 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:17.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:17 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:20:17 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:17 np0005592158 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2607 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:20:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:17.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:18 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:19.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:19 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:19.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:20 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:21.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:21 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:21 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:20:21 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:20:21 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:20:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:20:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:21.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:20:22 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:20:22 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:23.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:23 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:23 np0005592158 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2612 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:20:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:20:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:23.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:20:24 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:25 np0005592158 podman[230626]: 2026-01-22 14:20:25.108779708 +0000 UTC m=+0.105048767 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 22 09:20:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:25.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:25 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:20:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:25.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:20:26 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:20:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:27.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:20:27 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:20:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:27.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:27 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:27 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:20:27 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:20:27 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #85. Immutable memtables: 0.
Jan 22 09:20:27 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:20:27.914627) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:20:27 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 85
Jan 22 09:20:27 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091627914771, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 2000, "num_deletes": 251, "total_data_size": 3801160, "memory_usage": 3860928, "flush_reason": "Manual Compaction"}
Jan 22 09:20:27 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #86: started
Jan 22 09:20:27 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091627929338, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 86, "file_size": 2486047, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43388, "largest_seqno": 45383, "table_properties": {"data_size": 2478446, "index_size": 4159, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19604, "raw_average_key_size": 21, "raw_value_size": 2461643, "raw_average_value_size": 2664, "num_data_blocks": 180, "num_entries": 924, "num_filter_entries": 924, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091492, "oldest_key_time": 1769091492, "file_creation_time": 1769091627, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:20:27 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 14718 microseconds, and 6636 cpu microseconds.
Jan 22 09:20:27 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:20:27 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:20:27.929410) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #86: 2486047 bytes OK
Jan 22 09:20:27 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:20:27.929439) [db/memtable_list.cc:519] [default] Level-0 commit table #86 started
Jan 22 09:20:27 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:20:27.930941) [db/memtable_list.cc:722] [default] Level-0 commit table #86: memtable #1 done
Jan 22 09:20:27 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:20:27.930960) EVENT_LOG_v1 {"time_micros": 1769091627930954, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:20:27 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:20:27.930982) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:20:27 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 3791865, prev total WAL file size 3791865, number of live WAL files 2.
Jan 22 09:20:27 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000082.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:20:27 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:20:27.932356) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Jan 22 09:20:27 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:20:27 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [86(2427KB)], [84(8855KB)]
Jan 22 09:20:27 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091627932423, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [86], "files_L6": [84], "score": -1, "input_data_size": 11553949, "oldest_snapshot_seqno": -1}
Jan 22 09:20:28 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #87: 8653 keys, 9903981 bytes, temperature: kUnknown
Jan 22 09:20:28 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091628000367, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 87, "file_size": 9903981, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9853360, "index_size": 27853, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21701, "raw_key_size": 232018, "raw_average_key_size": 26, "raw_value_size": 9702513, "raw_average_value_size": 1121, "num_data_blocks": 1064, "num_entries": 8653, "num_filter_entries": 8653, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769091627, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 87, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:20:28 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:20:28 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:20:28.000885) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 9903981 bytes
Jan 22 09:20:28 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:20:28.002272) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.4 rd, 145.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 8.6 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(8.6) write-amplify(4.0) OK, records in: 9168, records dropped: 515 output_compression: NoCompression
Jan 22 09:20:28 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:20:28.002295) EVENT_LOG_v1 {"time_micros": 1769091628002284, "job": 52, "event": "compaction_finished", "compaction_time_micros": 68202, "compaction_time_cpu_micros": 33140, "output_level": 6, "num_output_files": 1, "total_output_size": 9903981, "num_input_records": 9168, "num_output_records": 8653, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:20:28 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:20:28 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091628003568, "job": 52, "event": "table_file_deletion", "file_number": 86}
Jan 22 09:20:28 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000084.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:20:28 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091628006276, "job": 52, "event": "table_file_deletion", "file_number": 84}
Jan 22 09:20:28 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:20:27.932267) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:20:28 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:20:28.006601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:20:28 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:20:28.006618) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:20:28 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:20:28.006622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:20:28 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:20:28.006626) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:20:28 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:20:28.006630) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:20:28 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:28 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:29.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:29.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:30 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:20:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:31.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:20:31 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:31.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:32 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:32 np0005592158 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2617 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:20:32 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:20:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:33.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:33 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:33.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:34 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:35.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:35 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:35.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:36 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:37.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:20:37 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:37 np0005592158 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2627 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:20:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:37.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:38 np0005592158 podman[230703]: 2026-01-22 14:20:38.071572376 +0000 UTC m=+0.066030831 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 22 09:20:38 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:39.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:39 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:39.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:40 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:41.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:41 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:41.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:42 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:20:42 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:43.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:43 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:43 np0005592158 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2632 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:20:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:43.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:44 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:45.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:45 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:45.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:47.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:20:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:20:47.456 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:20:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:20:47.456 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:20:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:20:47.457 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:20:47 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:20:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:47.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:20:48 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:48 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:49.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:49.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:50 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:51.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:51 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:51 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:51 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:51.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:20:52 np0005592158 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2637 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:20:52 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:53.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:53 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:53.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:54 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:55.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:55 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:55.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:56 np0005592158 podman[230722]: 2026-01-22 14:20:56.115237836 +0000 UTC m=+0.104202592 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 09:20:56 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:57.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:57 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:20:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:57.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:57 np0005592158 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2647 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:20:57 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:58 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:59.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:20:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:59.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:59 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:21:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:01.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:21:01 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:01.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:02 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:21:02 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:21:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:03.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:21:03 np0005592158 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2652 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:21:03 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:03.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:04 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:04 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:21:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:05.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:21:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:21:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:05.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:21:06 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:07.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:07 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:07 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:21:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:07.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:08 np0005592158 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2658 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:21:08 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:09 np0005592158 podman[230748]: 2026-01-22 14:21:09.099392437 +0000 UTC m=+0.080588609 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:21:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:09.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:09 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:09.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:10 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:11.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:11 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:11 np0005592158 ceph-mgr[82073]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 09:21:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:11.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:12 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:21:12 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:13.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:13 np0005592158 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2663 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:21:13 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:13.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:14 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:21:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:15.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:21:15 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:15.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:16 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:21:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:17.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:21:17 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:21:17 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:17.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:18 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:21:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:19.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:21:19 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:21:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:19.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:21:21 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:21.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:21.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:22 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:21:22 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:22 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:23.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:23 np0005592158 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2672 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:21:23 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:23.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:24 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:25.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:25 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:25.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:26 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:27 np0005592158 podman[230768]: 2026-01-22 14:21:27.10384487 +0000 UTC m=+0.097397678 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 22 09:21:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:27.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:27 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:21:27 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:27 np0005592158 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2677 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:21:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:27.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:29.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:29 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:29 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 09:21:29 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:21:29 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:21:29 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:21:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:21:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:29.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:21:30 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:30 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:31.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:31 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:31.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:32 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:32 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:21:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:33.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:33 np0005592158 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2682 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:21:33 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:33.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:35 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:35 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:21:35 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:21:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:35.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:35.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:36 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:36 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:37 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:37.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:21:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:37.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:38 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:39 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:39.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:39.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:40 np0005592158 podman[230975]: 2026-01-22 14:21:40.068009537 +0000 UTC m=+0.054997721 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 09:21:40 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:40 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:21:40.699 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:21:40 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:21:40.700 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:21:40 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:21:40.700 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:21:41 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:41.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:41.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:42 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:42 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:21:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:21:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:43.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:21:43 np0005592158 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2692 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:21:43 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:43.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:44 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:21:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:45.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:21:45 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:45.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:46 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:21:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:47.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:21:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:21:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:21:47.458 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:21:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:21:47.459 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:21:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:21:47.459 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:21:47 np0005592158 ceph-mon[81715]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:47 np0005592158 ceph-mon[81715]: Health check update: 22 slow ops, oldest one blocked for 2697 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:21:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:47.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:48 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:49.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:49 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:49.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:50 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:51.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:51 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:51.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:21:52 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:52 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:53.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:53 np0005592158 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 2702 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:21:53 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:53.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:55 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:55.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:55.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:56 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:57 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:57.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:57 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:21:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:58.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:58 np0005592158 podman[230994]: 2026-01-22 14:21:58.096099592 +0000 UTC m=+0.085163663 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:21:58 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:59 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:21:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:59.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:00.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:00 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:22:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:22:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:01.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:22:01 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:22:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:02.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:02 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:22:02 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:22:02 np0005592158 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 2707 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:22:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:03.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:03 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:22:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:04.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:04 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:05.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:05 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:06.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:06 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:07.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:07 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:22:07 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:07 np0005592158 ceph-mon[81715]: Health check update: 1 slow ops, oldest one blocked for 2717 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:22:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:22:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:08.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:22:08 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:09.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:09 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:22:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:10.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:22:10 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:10 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:11 np0005592158 podman[231021]: 2026-01-22 14:22:11.070799294 +0000 UTC m=+0.057969982 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 22 09:22:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:22:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:11.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:22:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.004000108s ======
Jan 22 09:22:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:12.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000108s
Jan 22 09:22:12 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:22:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:13.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:14 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:14 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:14 np0005592158 ceph-mon[81715]: Health check update: 1 slow ops, oldest one blocked for 2722 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:22:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:14.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:15 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:15 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:15.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:16.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:16 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:17 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:22:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:17.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:22:17 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:22:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:18.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:18 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:18 np0005592158 ceph-mon[81715]: Health check update: 1 slow ops, oldest one blocked for 2727 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:22:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:22:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2332942019' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:22:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:22:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2332942019' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:22:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:22:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:19.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:22:19 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:20.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:20 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:22:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:21.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:22:21 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:22.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:22 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:22:22 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:23.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:23 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:23 np0005592158 ceph-mon[81715]: Health check update: 1 slow ops, oldest one blocked for 2733 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:22:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:24.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:24 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:22:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:25.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:22:25 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:26.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:26 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:27.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:27 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:22:27 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:28.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:28 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:29 np0005592158 podman[231041]: 2026-01-22 14:22:29.091607582 +0000 UTC m=+0.085805420 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, tcib_managed=true)
Jan 22 09:22:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:29.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:29 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:30.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:30 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:31.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:31 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:32.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:32 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:22:33 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:33 np0005592158 ceph-mon[81715]: Health check update: 1 slow ops, oldest one blocked for 2738 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:22:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:22:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:33.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:22:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:34.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:34 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:34 np0005592158 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:35 np0005592158 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:35.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:36.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:36 np0005592158 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:36 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:22:36 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:22:36 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:22:37 np0005592158 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:22:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:37.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:22:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:22:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:22:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:38.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:22:38 np0005592158 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:38 np0005592158 ceph-mon[81715]: Health check update: 8 slow ops, oldest one blocked for 2748 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:22:39 np0005592158 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:22:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:39.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:22:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:40.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:40 np0005592158 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:41.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:41 np0005592158 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:41 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:22:41 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:22:41 np0005592158 podman[231220]: 2026-01-22 14:22:41.536561909 +0000 UTC m=+0.048815162 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:22:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:42.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:42 np0005592158 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:42 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:22:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:43.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:43 np0005592158 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:43 np0005592158 ceph-mon[81715]: Health check update: 8 slow ops, oldest one blocked for 2753 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:22:43 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:22:43.891 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:22:43 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:22:43.892 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:22:43 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:22:43.892 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:22:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:44.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:44 np0005592158 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:45.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:45 np0005592158 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:46.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:46 np0005592158 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:47 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #88. Immutable memtables: 0.
Jan 22 09:22:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:22:47.290934) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:22:47 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 88
Jan 22 09:22:47 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091767291031, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 2042, "num_deletes": 256, "total_data_size": 3945138, "memory_usage": 4010640, "flush_reason": "Manual Compaction"}
Jan 22 09:22:47 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #89: started
Jan 22 09:22:47 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091767307899, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 89, "file_size": 2581347, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45388, "largest_seqno": 47425, "table_properties": {"data_size": 2573555, "index_size": 4350, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19900, "raw_average_key_size": 21, "raw_value_size": 2556335, "raw_average_value_size": 2699, "num_data_blocks": 188, "num_entries": 947, "num_filter_entries": 947, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091627, "oldest_key_time": 1769091627, "file_creation_time": 1769091767, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:22:47 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 16994 microseconds, and 7483 cpu microseconds.
Jan 22 09:22:47 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:22:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:22:47.307953) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #89: 2581347 bytes OK
Jan 22 09:22:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:22:47.307975) [db/memtable_list.cc:519] [default] Level-0 commit table #89 started
Jan 22 09:22:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:22:47.309571) [db/memtable_list.cc:722] [default] Level-0 commit table #89: memtable #1 done
Jan 22 09:22:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:22:47.309585) EVENT_LOG_v1 {"time_micros": 1769091767309581, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:22:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:22:47.309604) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:22:47 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 3935628, prev total WAL file size 3935628, number of live WAL files 2.
Jan 22 09:22:47 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000085.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:22:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:22:47.310619) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373538' seq:72057594037927935, type:22 .. '6C6F676D0032303130' seq:0, type:0; will stop at (end)
Jan 22 09:22:47 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:22:47 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [89(2520KB)], [87(9671KB)]
Jan 22 09:22:47 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091767310733, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [89], "files_L6": [87], "score": -1, "input_data_size": 12485328, "oldest_snapshot_seqno": -1}
Jan 22 09:22:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:47.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:47 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #90: 9075 keys, 12329526 bytes, temperature: kUnknown
Jan 22 09:22:47 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091767373884, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 90, "file_size": 12329526, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12274142, "index_size": 31592, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22725, "raw_key_size": 242739, "raw_average_key_size": 26, "raw_value_size": 12113946, "raw_average_value_size": 1334, "num_data_blocks": 1217, "num_entries": 9075, "num_filter_entries": 9075, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769091767, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 90, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:22:47 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:22:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:22:47.374205) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 12329526 bytes
Jan 22 09:22:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:22:47.375484) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 197.7 rd, 195.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 9.4 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(9.6) write-amplify(4.8) OK, records in: 9600, records dropped: 525 output_compression: NoCompression
Jan 22 09:22:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:22:47.375500) EVENT_LOG_v1 {"time_micros": 1769091767375492, "job": 54, "event": "compaction_finished", "compaction_time_micros": 63140, "compaction_time_cpu_micros": 29994, "output_level": 6, "num_output_files": 1, "total_output_size": 12329526, "num_input_records": 9600, "num_output_records": 9075, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:22:47 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:22:47 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091767376103, "job": 54, "event": "table_file_deletion", "file_number": 89}
Jan 22 09:22:47 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000087.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:22:47 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091767377876, "job": 54, "event": "table_file_deletion", "file_number": 87}
Jan 22 09:22:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:22:47.310523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:22:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:22:47.378032) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:22:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:22:47.378038) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:22:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:22:47.378041) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:22:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:22:47.378043) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:22:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:22:47.378045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:22:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:22:47 np0005592158 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:22:47.459 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:22:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:22:47.459 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:22:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:22:47.459 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:22:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:48.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:48 np0005592158 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:49.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:49 np0005592158 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:50.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:50 np0005592158 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:51.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:51 np0005592158 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:52.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:22:52 np0005592158 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:52 np0005592158 ceph-mon[81715]: Health check update: 8 slow ops, oldest one blocked for 2758 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:22:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:53.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:53 np0005592158 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:54.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:54 np0005592158 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:55.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:55 np0005592158 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:56.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:56 np0005592158 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:57.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:57 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:22:57 np0005592158 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:57 np0005592158 ceph-mon[81715]: Health check update: 8 slow ops, oldest one blocked for 2768 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:22:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:22:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:58.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:22:58 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:22:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:22:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:22:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:59.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:22:59 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:00.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:00 np0005592158 podman[231266]: 2026-01-22 14:23:00.092471629 +0000 UTC m=+0.083525549 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:23:00 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:01.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:01 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:02.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:02 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:23:02 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:03.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:03 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:03 np0005592158 ceph-mon[81715]: Health check update: 29 slow ops, oldest one blocked for 2773 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:23:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:04.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:04 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:05.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:05 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:05 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:06.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:06 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:23:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:07.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:23:07 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:23:07 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:08.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:08 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:09.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:09 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:10.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:10 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:11.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:11 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:12 np0005592158 podman[231292]: 2026-01-22 14:23:12.064681555 +0000 UTC m=+0.058295040 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 22 09:23:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:12.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:12 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:23:12 np0005592158 ceph-mon[81715]: Health check update: 29 slow ops, oldest one blocked for 2783 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:23:12 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:13.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:13 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:14.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:14 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:23:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:15.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:23:15 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:16.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:16 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:23:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:17.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:23:17 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:23:17 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:17 np0005592158 ceph-mon[81715]: Health check update: 29 slow ops, oldest one blocked for 2788 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:23:17 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #91. Immutable memtables: 0.
Jan 22 09:23:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:23:17.986308) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:23:17 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 91
Jan 22 09:23:17 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091797986347, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 661, "num_deletes": 251, "total_data_size": 873640, "memory_usage": 886184, "flush_reason": "Manual Compaction"}
Jan 22 09:23:17 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #92: started
Jan 22 09:23:17 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091797991804, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 92, "file_size": 573365, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47430, "largest_seqno": 48086, "table_properties": {"data_size": 570254, "index_size": 955, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8063, "raw_average_key_size": 19, "raw_value_size": 563778, "raw_average_value_size": 1375, "num_data_blocks": 42, "num_entries": 410, "num_filter_entries": 410, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091767, "oldest_key_time": 1769091767, "file_creation_time": 1769091797, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:23:17 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 5529 microseconds, and 2471 cpu microseconds.
Jan 22 09:23:17 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:23:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:23:17.991839) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #92: 573365 bytes OK
Jan 22 09:23:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:23:17.991855) [db/memtable_list.cc:519] [default] Level-0 commit table #92 started
Jan 22 09:23:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:23:17.992984) [db/memtable_list.cc:722] [default] Level-0 commit table #92: memtable #1 done
Jan 22 09:23:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:23:17.992999) EVENT_LOG_v1 {"time_micros": 1769091797992993, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:23:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:23:17.993016) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:23:17 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 869950, prev total WAL file size 869950, number of live WAL files 2.
Jan 22 09:23:17 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000088.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:23:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:23:17.993504) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Jan 22 09:23:17 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:23:17 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [92(559KB)], [90(11MB)]
Jan 22 09:23:17 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091797993643, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [92], "files_L6": [90], "score": -1, "input_data_size": 12902891, "oldest_snapshot_seqno": -1}
Jan 22 09:23:18 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #93: 8975 keys, 11173901 bytes, temperature: kUnknown
Jan 22 09:23:18 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091798053075, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 93, "file_size": 11173901, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11120117, "index_size": 30248, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22469, "raw_key_size": 241548, "raw_average_key_size": 26, "raw_value_size": 10962302, "raw_average_value_size": 1221, "num_data_blocks": 1156, "num_entries": 8975, "num_filter_entries": 8975, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769091797, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 93, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:23:18 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:23:18 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:23:18.053556) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 11173901 bytes
Jan 22 09:23:18 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:23:18.055411) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 216.6 rd, 187.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 11.8 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(42.0) write-amplify(19.5) OK, records in: 9485, records dropped: 510 output_compression: NoCompression
Jan 22 09:23:18 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:23:18.055442) EVENT_LOG_v1 {"time_micros": 1769091798055428, "job": 56, "event": "compaction_finished", "compaction_time_micros": 59569, "compaction_time_cpu_micros": 27763, "output_level": 6, "num_output_files": 1, "total_output_size": 11173901, "num_input_records": 9485, "num_output_records": 8975, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:23:18 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:23:18 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091798055890, "job": 56, "event": "table_file_deletion", "file_number": 92}
Jan 22 09:23:18 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000090.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:23:18 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091798060504, "job": 56, "event": "table_file_deletion", "file_number": 90}
Jan 22 09:23:18 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:23:17.993427) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:23:18 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:23:18.060580) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:23:18 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:23:18.060586) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:23:18 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:23:18.060587) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:23:18 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:23:18.060589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:23:18 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:23:18.060591) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:23:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:18.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:18 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:19.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:20 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:20.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:21 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:21.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:23:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:22.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:23:22 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:22 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:23:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:23:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:23.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:23:23 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:23 np0005592158 ceph-mon[81715]: Health check update: 29 slow ops, oldest one blocked for 2793 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:23:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:24.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:24 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:25.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:25 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:26.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:26 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:27.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:27 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:23:27 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:28.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:28 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:23:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:29.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:23:29 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:30.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:30 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:31 np0005592158 podman[231311]: 2026-01-22 14:23:31.0935909 +0000 UTC m=+0.084165708 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 22 09:23:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:31.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:31 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:32.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:32 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:23:32 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:32 np0005592158 ceph-mon[81715]: Health check update: 29 slow ops, oldest one blocked for 2798 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:23:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:33.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:33 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:34.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:34 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:23:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:35.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:23:35 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:36.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:36 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:23:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:37.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:37 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:37 np0005592158 ceph-mon[81715]: Health check update: 29 slow ops, oldest one blocked for 2808 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:23:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:38.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:38 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:39.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:39 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:40.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:40 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:23:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:41.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:23:41 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:42.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:42 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:23:42 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:42 np0005592158 ceph-mon[81715]: Health check update: 29 slow ops, oldest one blocked for 2813 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:23:43 np0005592158 podman[231468]: 2026-01-22 14:23:43.07175197 +0000 UTC m=+0.060296710 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 22 09:23:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:43.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:44.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:44 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:44 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:23:44 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:23:44 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:23:44 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:23:44 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:23:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:45.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:45 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:46.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:46 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:23:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:47.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:23:47.460 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:23:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:23:47.461 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:23:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:23:47.461 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:23:47 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:47 np0005592158 ceph-mon[81715]: Health check update: 29 slow ops, oldest one blocked for 2818 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:23:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:48.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:48 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:49.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:49 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:23:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:50.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:23:50 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:50 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:23:50 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:23:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:23:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:51.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:23:51 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:52.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:23:52 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:53.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:53 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:53 np0005592158 ceph-mon[81715]: Health check update: 29 slow ops, oldest one blocked for 2823 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:23:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:54.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:54 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:23:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:55.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:23:55 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:56 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:23:56.086 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 09:23:56 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:23:56.087 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 09:23:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:56.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:56 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:57 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:23:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:57.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:57 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:58.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:58 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:23:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:23:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:59.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:23:59 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:00.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:00 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:01.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:01 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:01 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:02 np0005592158 podman[231537]: 2026-01-22 14:24:02.124276892 +0000 UTC m=+0.110299438 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:24:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:02.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:02 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:24:02 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:02 np0005592158 ceph-mon[81715]: Health check update: 29 slow ops, oldest one blocked for 2828 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:24:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:03.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:03 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:04.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:04 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:05.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:05 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:06 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:24:06.090 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 09:24:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:06.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:06 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:07 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:24:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:07.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:07 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:07 np0005592158 ceph-mon[81715]: Health check update: 29 slow ops, oldest one blocked for 2838 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:24:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:08.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:08 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:09.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:10.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:10 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:11.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:11 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:12.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:12 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:24:12 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:24:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:13.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:24:13 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:13 np0005592158 ceph-mon[81715]: Health check update: 29 slow ops, oldest one blocked for 2843 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:24:14 np0005592158 podman[231563]: 2026-01-22 14:24:14.058822195 +0000 UTC m=+0.053880365 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 22 09:24:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:24:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:14.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:24:14 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:24:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:15.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:24:15 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:24:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:16.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:24:16 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:17 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:24:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:17.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:17 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:24:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:18.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:24:18 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:19.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:19 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:20.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:20 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:21.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:21 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:22.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:22 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:24:23 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:23 np0005592158 ceph-mon[81715]: Health check update: 29 slow ops, oldest one blocked for 2853 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:24:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:24:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:23.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:24:24 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:24 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:24.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:25 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:24:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:25.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:24:26 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:24:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:26.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:24:27 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:27 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:24:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:27.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:28 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:28 np0005592158 ceph-mon[81715]: Health check update: 29 slow ops, oldest one blocked for 2857 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:24:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:24:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:28.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:24:29 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:29.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:30 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:24:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:30.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:24:31 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:31.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:32.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:32 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:32 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #94. Immutable memtables: 0.
Jan 22 09:24:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:24:32.324806) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:24:32 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 94
Jan 22 09:24:32 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091872324885, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 1245, "num_deletes": 251, "total_data_size": 2151052, "memory_usage": 2174488, "flush_reason": "Manual Compaction"}
Jan 22 09:24:32 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #95: started
Jan 22 09:24:32 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091872335911, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 95, "file_size": 922025, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48092, "largest_seqno": 49331, "table_properties": {"data_size": 917757, "index_size": 1664, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 13046, "raw_average_key_size": 21, "raw_value_size": 907787, "raw_average_value_size": 1500, "num_data_blocks": 72, "num_entries": 605, "num_filter_entries": 605, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091798, "oldest_key_time": 1769091798, "file_creation_time": 1769091872, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:24:32 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 11133 microseconds, and 3779 cpu microseconds.
Jan 22 09:24:32 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:24:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:24:32.335952) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #95: 922025 bytes OK
Jan 22 09:24:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:24:32.335969) [db/memtable_list.cc:519] [default] Level-0 commit table #95 started
Jan 22 09:24:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:24:32.337223) [db/memtable_list.cc:722] [default] Level-0 commit table #95: memtable #1 done
Jan 22 09:24:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:24:32.337236) EVENT_LOG_v1 {"time_micros": 1769091872337232, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:24:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:24:32.337253) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:24:32 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 2144918, prev total WAL file size 2144918, number of live WAL files 2.
Jan 22 09:24:32 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000091.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:24:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:24:32.338135) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323534' seq:72057594037927935, type:22 .. '6D6772737461740031353036' seq:0, type:0; will stop at (end)
Jan 22 09:24:32 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:24:32 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [95(900KB)], [93(10MB)]
Jan 22 09:24:32 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091872338169, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [95], "files_L6": [93], "score": -1, "input_data_size": 12095926, "oldest_snapshot_seqno": -1}
Jan 22 09:24:32 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:24:32 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #96: 9096 keys, 8667682 bytes, temperature: kUnknown
Jan 22 09:24:32 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091872397217, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 96, "file_size": 8667682, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8617179, "index_size": 26647, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22789, "raw_key_size": 244746, "raw_average_key_size": 26, "raw_value_size": 8461301, "raw_average_value_size": 930, "num_data_blocks": 1006, "num_entries": 9096, "num_filter_entries": 9096, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769091872, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 96, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:24:32 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:24:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:24:32.398170) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 8667682 bytes
Jan 22 09:24:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:24:32.399281) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 204.6 rd, 146.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 10.7 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(22.5) write-amplify(9.4) OK, records in: 9580, records dropped: 484 output_compression: NoCompression
Jan 22 09:24:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:24:32.399304) EVENT_LOG_v1 {"time_micros": 1769091872399293, "job": 58, "event": "compaction_finished", "compaction_time_micros": 59111, "compaction_time_cpu_micros": 23373, "output_level": 6, "num_output_files": 1, "total_output_size": 8667682, "num_input_records": 9580, "num_output_records": 9096, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:24:32 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:24:32 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091872399682, "job": 58, "event": "table_file_deletion", "file_number": 95}
Jan 22 09:24:32 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000093.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:24:32 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091872402629, "job": 58, "event": "table_file_deletion", "file_number": 93}
Jan 22 09:24:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:24:32.338075) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:24:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:24:32.402908) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:24:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:24:32.402935) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:24:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:24:32.402938) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:24:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:24:32.402940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:24:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:24:32.402943) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:24:33 np0005592158 podman[231583]: 2026-01-22 14:24:33.098551521 +0000 UTC m=+0.084582739 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, 
org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 09:24:33 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:33 np0005592158 ceph-mon[81715]: Health check update: 29 slow ops, oldest one blocked for 2862 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:24:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:33.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:24:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:34.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:24:34 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:34 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 09:24:34 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.5 total, 600.0 interval#012Cumulative writes: 9281 writes, 33K keys, 9281 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 9281 writes, 2431 syncs, 3.82 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 889 writes, 2183 keys, 889 commit groups, 1.0 writes per commit group, ingest: 2.18 MB, 0.00 MB/s#012Interval WAL: 889 writes, 406 syncs, 2.19 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 09:24:35 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:35.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:24:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:36.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:24:36 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:37 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:24:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:24:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:37.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:24:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:38.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:38 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:38 np0005592158 ceph-mon[81715]: Health check update: 29 slow ops, oldest one blocked for 2867 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:24:39 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:24:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:39.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:24:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:24:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:40.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:24:40 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:41 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:41.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:24:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:42.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:24:42 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:42 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:24:43 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:43 np0005592158 ceph-mon[81715]: Health check update: 29 slow ops, oldest one blocked for 2872 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:24:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:43.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:44.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:44 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:45 np0005592158 podman[231609]: 2026-01-22 14:24:45.088869637 +0000 UTC m=+0.071812523 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible)
Jan 22 09:24:45 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:24:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:45.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:24:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:46.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:46 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:24:47 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:24:47.461 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:24:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:24:47.462 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:24:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:24:47.462 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:24:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:24:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:47.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:24:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:24:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:48.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:24:48 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:49 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:24:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:49.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:24:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:50.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:50 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:51 np0005592158 podman[231800]: 2026-01-22 14:24:51.054549283 +0000 UTC m=+0.063774864 container exec 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:24:51 np0005592158 podman[231800]: 2026-01-22 14:24:51.154312594 +0000 UTC m=+0.163538065 container exec_died 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:24:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:51.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:51 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:52.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:24:52 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:52 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:24:52 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:24:52 np0005592158 ceph-mon[81715]: Health check update: 29 slow ops, oldest one blocked for 2877 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:24:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:53.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:53 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:53 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:24:53 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:24:53 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:24:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:54.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:54 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:55.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:55 np0005592158 ceph-mon[81715]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:56.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:56 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:24:57 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:24:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:57.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:57 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:24:57 np0005592158 ceph-mon[81715]: Health check update: 29 slow ops, oldest one blocked for 2887 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:24:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:58.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:58 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:24:58 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:24:58 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:24:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:24:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:59.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:59 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:00.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:00 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:01.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:01 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:02.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:02 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:25:02 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:03.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:03 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:03 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 2892 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:25:04 np0005592158 podman[232100]: 2026-01-22 14:25:04.106371972 +0000 UTC m=+0.096561786 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:25:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:04.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:04 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:05.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:05 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:06.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:06 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:07 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:25:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:07.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:07 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:08.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:08 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:09.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:09 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:10.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:10 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:25:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:11.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:25:12 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:12.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:12 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:25:13 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:13 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 2902 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:25:13 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:13.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:14 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:14.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:15 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:15.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:16 np0005592158 podman[232126]: 2026-01-22 14:25:16.05954569 +0000 UTC m=+0.050088331 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 22 09:25:16 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:16.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:17 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:17 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:25:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:17.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:18 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 2907 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:25:18 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:25:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:18.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:25:19 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:19.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:20 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:25:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:20.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:25:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:25:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:21.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:25:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:22.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:22 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:25:22 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:22 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:23 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 2912 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:25:23 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:23.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:24.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:24 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:25 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:25:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:25.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:25:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:25:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:26.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:25:26 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:27 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:25:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:27.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:27 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:28.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:28 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:29.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:29 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:30.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:30 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:25:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:31.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:25:31 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 09:25:31 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.0 total, 600.0 interval#012Cumulative writes: 9128 writes, 50K keys, 9128 commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.03 MB/s#012Cumulative WAL: 9127 writes, 9127 syncs, 1.00 writes per sync, written: 0.09 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1807 writes, 9447 keys, 1807 commit groups, 1.0 writes per commit group, ingest: 16.36 MB, 0.03 MB/s#012Interval WAL: 1806 writes, 1806 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     55.7      0.97              0.17        29    0.034       0      0       0.0       0.0#012  L6      1/0    8.27 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.6    141.9    119.6      2.06              0.70        28    0.074    199K    15K       0.0       0.0#012 Sum      1/0    8.27 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.6     96.4     99.1      3.04              0.88        57    0.053    199K    15K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.8    157.4    153.7      0.49              0.22        14    0.035     64K   3548       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    141.9    119.6      2.06              0.70        28    0.074    199K    15K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     55.8      0.97              0.17        28    0.035       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3000.0 total, 600.0 interval#012Flush(GB): cumulative 0.053, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.29 GB write, 0.10 MB/s write, 0.29 GB read, 0.10 MB/s read, 3.0 seconds#012Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.13 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f7686a91f0#2 capacity: 304.00 MB usage: 31.45 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.00021 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1661,30.10 MB,9.90208%) FilterBlock(57,569.98 KB,0.1831%) IndexBlock(57,805.67 KB,0.258812%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 22 09:25:31 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:32.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:32 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:25:32 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:32 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 2918 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:25:32 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:33.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:33 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:34.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:34 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:35 np0005592158 podman[232145]: 2026-01-22 14:25:35.135451029 +0000 UTC m=+0.121127343 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:25:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:25:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:35.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:25:35 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:36.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:36 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:25:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:25:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:37.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:25:37 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 2927 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:25:37 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:38.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:38 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:39.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:39 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:40.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:40 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:41.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:41 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:25:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:42.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:25:42 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:25:43 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 2933 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:25:43 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:43.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:44 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:44.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:45 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:25:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:45.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:25:46 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:46.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:47 np0005592158 podman[232172]: 2026-01-22 14:25:47.068170521 +0000 UTC m=+0.053776202 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 22 09:25:47 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:25:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:25:47.462 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:25:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:25:47.463 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:25:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:25:47.464 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:25:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:47.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:48 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 2938 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:25:48 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:48.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:49 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:49.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:50 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:25:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:50.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:25:51 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:51.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:52.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:52 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:25:53 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:25:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:53.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:25:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:25:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:54.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:25:54 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:55 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:55.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:25:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:56.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:25:56 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:57 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:25:57 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 2948 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:25:57 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:25:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:57.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:25:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:25:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:58.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:25:58 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:25:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:59.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:59 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:59 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:25:59 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:25:59 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #97. Immutable memtables: 0.
Jan 22 09:25:59 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:25:59.831136) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:25:59 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 97
Jan 22 09:25:59 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091959831186, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 1437, "num_deletes": 251, "total_data_size": 2521159, "memory_usage": 2571232, "flush_reason": "Manual Compaction"}
Jan 22 09:25:59 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #98: started
Jan 22 09:25:59 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091959846455, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 98, "file_size": 1644723, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49336, "largest_seqno": 50768, "table_properties": {"data_size": 1639072, "index_size": 2791, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14729, "raw_average_key_size": 20, "raw_value_size": 1626619, "raw_average_value_size": 2303, "num_data_blocks": 120, "num_entries": 706, "num_filter_entries": 706, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091873, "oldest_key_time": 1769091873, "file_creation_time": 1769091959, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:25:59 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 15411 microseconds, and 6658 cpu microseconds.
Jan 22 09:25:59 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:25:59 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:25:59.846552) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #98: 1644723 bytes OK
Jan 22 09:25:59 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:25:59.846575) [db/memtable_list.cc:519] [default] Level-0 commit table #98 started
Jan 22 09:25:59 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:25:59.853859) [db/memtable_list.cc:722] [default] Level-0 commit table #98: memtable #1 done
Jan 22 09:25:59 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:25:59.853911) EVENT_LOG_v1 {"time_micros": 1769091959853900, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:25:59 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:25:59.853937) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:25:59 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 2514245, prev total WAL file size 2514245, number of live WAL files 2.
Jan 22 09:25:59 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000094.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:25:59 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:25:59.855124) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Jan 22 09:25:59 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:25:59 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [98(1606KB)], [96(8464KB)]
Jan 22 09:25:59 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091959855209, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [98], "files_L6": [96], "score": -1, "input_data_size": 10312405, "oldest_snapshot_seqno": -1}
Jan 22 09:25:59 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #99: 9285 keys, 8614300 bytes, temperature: kUnknown
Jan 22 09:25:59 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091959901542, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 99, "file_size": 8614300, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8562886, "index_size": 27110, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23237, "raw_key_size": 249905, "raw_average_key_size": 26, "raw_value_size": 8403895, "raw_average_value_size": 905, "num_data_blocks": 1020, "num_entries": 9285, "num_filter_entries": 9285, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769091959, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 99, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:25:59 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:25:59 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:25:59.901904) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 8614300 bytes
Jan 22 09:25:59 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:25:59.903553) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 221.8 rd, 185.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 8.3 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(11.5) write-amplify(5.2) OK, records in: 9802, records dropped: 517 output_compression: NoCompression
Jan 22 09:25:59 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:25:59.903572) EVENT_LOG_v1 {"time_micros": 1769091959903563, "job": 60, "event": "compaction_finished", "compaction_time_micros": 46491, "compaction_time_cpu_micros": 24816, "output_level": 6, "num_output_files": 1, "total_output_size": 8614300, "num_input_records": 9802, "num_output_records": 9285, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:25:59 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:25:59 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091959903973, "job": 60, "event": "table_file_deletion", "file_number": 98}
Jan 22 09:25:59 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000096.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:25:59 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091959905530, "job": 60, "event": "table_file_deletion", "file_number": 96}
Jan 22 09:25:59 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:25:59.855023) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:25:59 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:25:59.905624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:25:59 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:25:59.905631) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:25:59 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:25:59.905633) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:25:59 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:25:59.905635) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:25:59 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:25:59.905636) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:26:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:00.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:00 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:00 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:26:00 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:26:00 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:26:01 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:01.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:02.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:02 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:26:02 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:03 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:03 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 2952 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:26:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:03.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:04.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:04 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:05 np0005592158 podman[232346]: 2026-01-22 14:26:05.587583656 +0000 UTC m=+0.080577361 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 09:26:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:05.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:05 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:05 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:26:05 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:26:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:06.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:06 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:07 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:26:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:07.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:07 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:26:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:08.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:26:08 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:09.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:09 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:10.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:10 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:11.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:11 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:12.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:12 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:26:12 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:12 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 2957 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:26:12 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:13.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:13 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:14.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:14 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:15.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:15 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:16.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:16 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:17 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:26:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:26:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:17.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:26:18 np0005592158 podman[232400]: 2026-01-22 14:26:18.064304794 +0000 UTC m=+0.065937794 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:26:18 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 2967 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:26:18 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:18.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:19 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:19.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:20 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:26:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:20.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:26:21 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:21.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:26:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:22.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:26:22 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:22 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:26:23 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:26:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:23.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:26:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:24.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:24 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:25 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:25.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:26.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:26 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:27 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:26:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:26:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:27.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:26:27 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:27 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 2977 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:26:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:26:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:28.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:26:29 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:26:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:29.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:26:30 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:30 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:30.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:31 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:31.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:32 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:26:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:32.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:26:32 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:26:33 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 2982 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:26:33 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:33.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:34 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:34.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:35 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:35.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:36 np0005592158 podman[232420]: 2026-01-22 14:26:36.143608368 +0000 UTC m=+0.129149141 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 09:26:36 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:36.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:37 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:26:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:37.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:38 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:38 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 2987 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:26:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:38.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:39 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:39.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:40 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:40.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:41 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:41.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:42 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:42.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:42 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:26:43 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:43 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 2992 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:26:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:43.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:44 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:44.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:45 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:45.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:46.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:46 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:26:47.464 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:26:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:26:47.465 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:26:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:26:47.465 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:26:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:26:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:47.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:47 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:47 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 2997 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:26:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:48.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:48 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:48 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:49 np0005592158 podman[232446]: 2026-01-22 14:26:49.0587265 +0000 UTC m=+0.051587823 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 22 09:26:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:49.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:49 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:50.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:51 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:51.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:52 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:52.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:26:53 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3003 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:26:53 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:53.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:54 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:54.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:55 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:55.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:56 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:56.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:57 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:57 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3007 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:26:57 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:26:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:26:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:57.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:26:58 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:26:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:58.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:26:59 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:26:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:59.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:27:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:00.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:27:00 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:27:01 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:01.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:02.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:02 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:27:02 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:03 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:03 np0005592158 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 3012 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:27:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:27:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:03.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:27:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:04.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:04 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:05 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:05.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:06 np0005592158 podman[232584]: 2026-01-22 14:27:06.339420119 +0000 UTC m=+0.161614253 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 22 09:27:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:27:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:06.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:27:06 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:06 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:27:06 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:27:07 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #100. Immutable memtables: 0.
Jan 22 09:27:07 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:27:07.371354) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:27:07 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 100
Jan 22 09:27:07 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092027371411, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 1161, "num_deletes": 256, "total_data_size": 1970462, "memory_usage": 1996448, "flush_reason": "Manual Compaction"}
Jan 22 09:27:07 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #101: started
Jan 22 09:27:07 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092027380313, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 101, "file_size": 1294814, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 50773, "largest_seqno": 51929, "table_properties": {"data_size": 1289998, "index_size": 2212, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12371, "raw_average_key_size": 20, "raw_value_size": 1279428, "raw_average_value_size": 2100, "num_data_blocks": 95, "num_entries": 609, "num_filter_entries": 609, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091960, "oldest_key_time": 1769091960, "file_creation_time": 1769092027, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:27:07 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 8994 microseconds, and 4142 cpu microseconds.
Jan 22 09:27:07 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:27:07 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:27:07.380362) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #101: 1294814 bytes OK
Jan 22 09:27:07 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:27:07.380385) [db/memtable_list.cc:519] [default] Level-0 commit table #101 started
Jan 22 09:27:07 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:27:07.381578) [db/memtable_list.cc:722] [default] Level-0 commit table #101: memtable #1 done
Jan 22 09:27:07 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:27:07.381594) EVENT_LOG_v1 {"time_micros": 1769092027381589, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:27:07 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:27:07.381611) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:27:07 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 1964647, prev total WAL file size 1964647, number of live WAL files 2.
Jan 22 09:27:07 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000097.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:27:07 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:27:07.382277) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303039' seq:72057594037927935, type:22 .. '6C6F676D0032323631' seq:0, type:0; will stop at (end)
Jan 22 09:27:07 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:27:07 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [101(1264KB)], [99(8412KB)]
Jan 22 09:27:07 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092027382349, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [101], "files_L6": [99], "score": -1, "input_data_size": 9909114, "oldest_snapshot_seqno": -1}
Jan 22 09:27:07 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #102: 9367 keys, 9739902 bytes, temperature: kUnknown
Jan 22 09:27:07 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092027429178, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 102, "file_size": 9739902, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9686831, "index_size": 28575, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23429, "raw_key_size": 252989, "raw_average_key_size": 27, "raw_value_size": 9525152, "raw_average_value_size": 1016, "num_data_blocks": 1078, "num_entries": 9367, "num_filter_entries": 9367, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769092027, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 102, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:27:07 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:27:07 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:27:07.429435) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 9739902 bytes
Jan 22 09:27:07 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:27:07.430876) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 211.2 rd, 207.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 8.2 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(15.2) write-amplify(7.5) OK, records in: 9894, records dropped: 527 output_compression: NoCompression
Jan 22 09:27:07 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:27:07.430891) EVENT_LOG_v1 {"time_micros": 1769092027430883, "job": 62, "event": "compaction_finished", "compaction_time_micros": 46913, "compaction_time_cpu_micros": 23364, "output_level": 6, "num_output_files": 1, "total_output_size": 9739902, "num_input_records": 9894, "num_output_records": 9367, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:27:07 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:27:07 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092027431336, "job": 62, "event": "table_file_deletion", "file_number": 101}
Jan 22 09:27:07 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000099.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:27:07 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092027432997, "job": 62, "event": "table_file_deletion", "file_number": 99}
Jan 22 09:27:07 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:27:07.382161) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:27:07 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:27:07.433107) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:27:07 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:27:07.433112) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:27:07 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:27:07.433113) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:27:07 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:27:07.433116) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:27:07 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:27:07.433118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:27:07 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:27:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:07.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:07 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:07 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:27:07 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:27:07 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:27:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:08.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:08 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:09 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:09.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:10.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:10 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:11 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:11.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:12.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:12 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:27:12 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:12 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3017 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:27:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:14.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:14 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:14 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:27:14 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:27:14 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:14.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:15 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:16.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:16.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:16 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:17 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:27:17 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:17 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3027 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:27:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:18.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:27:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/893010323' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:27:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:27:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/893010323' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:27:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:27:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:18.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:27:18 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:19 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:20 np0005592158 podman[232793]: 2026-01-22 14:27:20.07873668 +0000 UTC m=+0.065817069 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:27:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:20.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:20.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:20 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:22 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:22 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:27:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:22.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:27:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:27:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:22.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:27:22 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:27:23 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:23 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3032 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:27:24 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:24.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:27:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:24.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:27:25 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:26.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:26 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:26.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:27 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:27 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:27:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:28.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:28 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3037 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:27:28 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:28.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:29 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:30.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:30 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:30.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:31 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:32.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:32 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:32.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:32 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:27:33 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3042 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:27:33 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:27:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:34.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:27:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:34.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:34 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:35 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:36.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:36.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:36 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:36 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:37 np0005592158 podman[232813]: 2026-01-22 14:27:37.097306735 +0000 UTC m=+0.083971294 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 09:27:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:27:37 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:38.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:38.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:38 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:40 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:40.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:40.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:41 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:42.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:42 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:42.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:42 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:27:43 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3053 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:27:43 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:27:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:44.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:27:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:44.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:44 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:45 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:27:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:46.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:27:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:46.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:46 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:46 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:27:47.464 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:27:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:27:47.465 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:27:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:27:47.465 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" released by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:27:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:27:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:48.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:48 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3058 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:27:48 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:48.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:49 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:50.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:50.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:51 np0005592158 podman[232841]: 2026-01-22 14:27:51.067646637 +0000 UTC m=+0.051661475 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 22 09:27:51 np0005592158 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 21 ])
Jan 22 09:27:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:52.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:27:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:52.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:27:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:27:52 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:52 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:54 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:54 np0005592158 ceph-mon[81715]: Health check update: 25 slow ops, oldest one blocked for 3063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:27:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:54.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:54.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:55 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:55 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:27:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:56.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:27:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:56.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:56 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:57 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:27:57 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:27:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:58.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:27:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:27:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:58.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:59 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 22 09:28:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:00.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 22 09:28:00 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:00 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:28:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:00.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:28:01 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:28:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:02.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:28:02 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:28:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:02.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:02 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:02 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3068 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:28:04 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:04.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:04.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:05 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:05 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:06.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:06.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:06 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:07 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:28:08 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:08 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3078 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:28:08 np0005592158 podman[232860]: 2026-01-22 14:28:08.147572449 +0000 UTC m=+0.137239881 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 22 09:28:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:08.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:08.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:09 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:09 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:28:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:10.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:28:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:10.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:10 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:12.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:12 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:28:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:12.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:13 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:13 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:14 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:14.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:14.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:15 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:15 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:28:15 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:28:15 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 09:28:15 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 09:28:15 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:16.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:16.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:16 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:16 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:28:16 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:28:16 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:28:16 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:28:16 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:28:17 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:28:18 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:18 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3088 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:28:18 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:28:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3920902371' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:28:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:28:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3920902371' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:28:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:28:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:18.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:28:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:18.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:20.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:20.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:20 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:22 np0005592158 podman[233020]: 2026-01-22 14:28:22.072416414 +0000 UTC m=+0.061450441 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Jan 22 09:28:22 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:28:22.087 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:28:22 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:28:22.088 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:28:22 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:28:22.089 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:28:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:28:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:22.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:28:22 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:22 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:28:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:28:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:22.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:28:23 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:23 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:23 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3093 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:28:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:28:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:24.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:28:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:24.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:24 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:25 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:25 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:28:25 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:28:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:26.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:26.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:26 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:27 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:28:27 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:28:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:28.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:28:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:28.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:28 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:29 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:30.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:30.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:30 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:31 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:28:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:32.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:28:32 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:28:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:32.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:32 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:32 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3098 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:28:33 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:28:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:34.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:28:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:34.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:35 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:35 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:36.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:28:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:36.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:28:36 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:28:37 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:37 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3108 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:28:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:28:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:38.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:28:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:38.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:39 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:39 np0005592158 podman[233089]: 2026-01-22 14:28:39.11788961 +0000 UTC m=+0.100559204 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:28:40 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:40 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:40.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:40.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:41 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:42.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:42 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:28:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:42.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:42 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:43 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:43 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:28:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:44.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:44.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:44 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:45 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:28:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:46.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:28:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:46.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:46 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:28:47.465 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:28:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:28:47.466 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:28:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:28:47.466 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:28:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:28:47 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:48.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:48.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:49 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:49 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:49 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #103. Immutable memtables: 0.
Jan 22 09:28:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:28:49.644533) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:28:49 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 103
Jan 22 09:28:49 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092129644573, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 1559, "num_deletes": 251, "total_data_size": 3044666, "memory_usage": 3088896, "flush_reason": "Manual Compaction"}
Jan 22 09:28:49 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #104: started
Jan 22 09:28:49 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092129656692, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 104, "file_size": 1979886, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51934, "largest_seqno": 53488, "table_properties": {"data_size": 1973541, "index_size": 3356, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 16179, "raw_average_key_size": 21, "raw_value_size": 1959839, "raw_average_value_size": 2561, "num_data_blocks": 145, "num_entries": 765, "num_filter_entries": 765, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092027, "oldest_key_time": 1769092027, "file_creation_time": 1769092129, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:28:49 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 12193 microseconds, and 5667 cpu microseconds.
Jan 22 09:28:49 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:28:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:28:49.656726) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #104: 1979886 bytes OK
Jan 22 09:28:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:28:49.656744) [db/memtable_list.cc:519] [default] Level-0 commit table #104 started
Jan 22 09:28:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:28:49.657926) [db/memtable_list.cc:722] [default] Level-0 commit table #104: memtable #1 done
Jan 22 09:28:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:28:49.657936) EVENT_LOG_v1 {"time_micros": 1769092129657933, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:28:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:28:49.657952) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:28:49 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 3037162, prev total WAL file size 3037162, number of live WAL files 2.
Jan 22 09:28:49 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000100.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:28:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:28:49.658741) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Jan 22 09:28:49 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:28:49 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [104(1933KB)], [102(9511KB)]
Jan 22 09:28:49 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092129658816, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [104], "files_L6": [102], "score": -1, "input_data_size": 11719788, "oldest_snapshot_seqno": -1}
Jan 22 09:28:49 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #105: 9615 keys, 10083242 bytes, temperature: kUnknown
Jan 22 09:28:49 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092129727407, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 105, "file_size": 10083242, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10028388, "index_size": 29718, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24069, "raw_key_size": 259598, "raw_average_key_size": 26, "raw_value_size": 9862288, "raw_average_value_size": 1025, "num_data_blocks": 1122, "num_entries": 9615, "num_filter_entries": 9615, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769092129, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 105, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:28:49 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:28:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:28:49.727786) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 10083242 bytes
Jan 22 09:28:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:28:49.729180) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 170.6 rd, 146.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 9.3 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(11.0) write-amplify(5.1) OK, records in: 10132, records dropped: 517 output_compression: NoCompression
Jan 22 09:28:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:28:49.729200) EVENT_LOG_v1 {"time_micros": 1769092129729188, "job": 64, "event": "compaction_finished", "compaction_time_micros": 68695, "compaction_time_cpu_micros": 25758, "output_level": 6, "num_output_files": 1, "total_output_size": 10083242, "num_input_records": 10132, "num_output_records": 9615, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:28:49 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:28:49 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092129729761, "job": 64, "event": "table_file_deletion", "file_number": 104}
Jan 22 09:28:49 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000102.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:28:49 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092129731314, "job": 64, "event": "table_file_deletion", "file_number": 102}
Jan 22 09:28:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:28:49.658626) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:28:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:28:49.731409) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:28:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:28:49.731423) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:28:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:28:49.731425) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:28:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:28:49.731426) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:28:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:28:49.731428) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:28:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:50.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:50.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:50 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:51 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:51 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:52.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:28:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:52.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:52 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3118 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:28:53 np0005592158 podman[233116]: 2026-01-22 14:28:53.065801891 +0000 UTC m=+0.056501397 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 09:28:53 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:54.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:54.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:55 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:28:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:56.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:28:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:56.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:56 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:56 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:57 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:28:57 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:57 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:28:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:58.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:28:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:58.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:58 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 09:28:58 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3247440006' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 09:28:59 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:00.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:00.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:01 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:01 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:01 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:02 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:02.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:02 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:29:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:29:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:02.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:29:03 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:29:04 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:04.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:04.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:05 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:06 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:06 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:29:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:06.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:29:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:06.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:07 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:07 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:29:08 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3138 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:29:08 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:08.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:29:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:08.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:29:10 np0005592158 podman[233136]: 2026-01-22 14:29:10.135034462 +0000 UTC m=+0.116524487 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 22 09:29:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:10.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:10 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:10 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:10.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:11 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:12.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:12.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:12 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:29:12 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:13 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3143 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:29:13 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:14.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:14.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:14 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:15 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:16 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:29:16.337 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:29:16 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:29:16.339 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:29:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:29:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:16.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:29:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:16.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:16 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:17 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:29:17 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:29:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2512728204' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:29:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:29:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2512728204' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:29:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:18.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:18.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:18 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:19 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:20.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:20.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:20 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:21 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:22.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:22.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:22 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:29:22 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:22 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3148 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:29:22 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:24 np0005592158 podman[233162]: 2026-01-22 14:29:24.064405761 +0000 UTC m=+0.051340467 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 09:29:24 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:24.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:24.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:25 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:26 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:29:26.341 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:29:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:29:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:26.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:29:26 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:29:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:26.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:29:27 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:29:27 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:27 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:29:27 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:29:27 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:29:27 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3158 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:29:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:28.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:28.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:28 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:29 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:30.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:30.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:30 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:31 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:29:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:32.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:29:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:32.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:32 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:29:33 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:33 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:29:33 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:29:34 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:34 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:29:34 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:34.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:34.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:35 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:36 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:36.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:36.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:37 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:29:38 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:38.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:38.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:39 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:40 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:40.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:40.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:41 np0005592158 podman[233365]: 2026-01-22 14:29:41.139298866 +0000 UTC m=+0.122310255 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Jan 22 09:29:41 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:29:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:42.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:29:42 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:42 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:29:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:42.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:43 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3173 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:29:43 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:44.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:44 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:44.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:45 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:46.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:46 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:46.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:29:47.466 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:29:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:29:47.467 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:29:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:29:47.467 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:29:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:29:47 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:47 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3178 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:29:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:48.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:29:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:48.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:29:49 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:49 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:29:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:50.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:29:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:29:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:50.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:29:50 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:51 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:52.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:29:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:52.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:52 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:53 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3183 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:29:53 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:54.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:54.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:54 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:55 np0005592158 podman[233393]: 2026-01-22 14:29:55.057750086 +0000 UTC m=+0.047936873 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 22 09:29:55 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:56.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:56.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:57 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:29:57 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:57 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:58.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:29:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:29:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:58.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:29:58 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:59 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:59 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:00.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:00.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:00 np0005592158 ceph-mon[81715]: Health detail: HEALTH_WARN 30 slow ops, oldest one blocked for 3188 sec, osd.2 has slow ops
Jan 22 09:30:00 np0005592158 ceph-mon[81715]: [WRN] SLOW_OPS: 30 slow ops, oldest one blocked for 3188 sec, osd.2 has slow ops
Jan 22 09:30:01 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:01 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:02.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:02 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:30:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:30:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:02.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:30:02 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3188 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:30:02 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:03 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:04.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:04.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:04 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:06 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:06.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:06.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:07 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:07 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:30:07.244 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:30:07 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:30:07.245 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:30:07 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:30:08 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3198 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:30:08 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:08.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:08.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:09 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:30:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:10.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:30:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:10.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:11 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:11 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:30:11.247 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:30:12 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:12 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:12 np0005592158 podman[233412]: 2026-01-22 14:30:12.143607719 +0000 UTC m=+0.115078818 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 09:30:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:12.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:12 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:30:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:30:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:12.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:30:13 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3203 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:30:14 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:14.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:14.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:15 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:16 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:16.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:16.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:17 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:17 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:30:18 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:18 np0005592158 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 3208 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:30:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:18.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:18.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:19 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:20 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:20.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:20.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:21 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:22 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:22.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:22 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:30:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:22.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:23 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:23 np0005592158 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 3213 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:30:24 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:24.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:24.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:25 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:26 np0005592158 podman[233437]: 2026-01-22 14:30:26.067834808 +0000 UTC m=+0.061099242 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 22 09:30:26 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:26.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:26.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:27 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:27 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:30:28 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:28 np0005592158 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 3218 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:30:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:28.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:28.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:29 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:30 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:30:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:30.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:30:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:30.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:31 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:32 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:32 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:32.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:32 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:30:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:32.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:33 np0005592158 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 3223 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:30:34 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:34 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:30:34 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:30:34 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:30:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:34.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:34.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:35 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:36 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:36.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:36.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:37 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:30:38 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:38.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:38.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:39 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:40.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:40 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:40 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:30:40 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:30:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:40.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:41 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:42.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:42 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:42 np0005592158 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 3228 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:30:42 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:30:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:42.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:43 np0005592158 podman[233639]: 2026-01-22 14:30:43.108082462 +0000 UTC m=+0.100060471 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible)
Jan 22 09:30:43 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:44.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:44 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:44.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:45 np0005592158 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:46.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:46 np0005592158 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:46.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:30:47.467 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:30:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:30:47.467 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:30:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:30:47.467 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:30:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:30:47 np0005592158 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:47 np0005592158 ceph-mon[81715]: Health check update: 31 slow ops, oldest one blocked for 3238 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:30:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:48.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:48 np0005592158 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:30:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:48.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:30:49 np0005592158 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:50.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:50.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:50 np0005592158 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:51 np0005592158 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:52.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:30:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:52.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:52 np0005592158 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:53 np0005592158 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:53 np0005592158 ceph-mon[81715]: Health check update: 31 slow ops, oldest one blocked for 3243 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:30:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:30:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:54.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:30:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:54.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:54 np0005592158 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:55 np0005592158 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:30:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:56.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:30:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:56.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:56 np0005592158 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:57 np0005592158 podman[233666]: 2026-01-22 14:30:57.082383321 +0000 UTC m=+0.066492959 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:30:57 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:30:57 np0005592158 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:57 np0005592158 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:58.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:30:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:58.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:58 np0005592158 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:59 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:31:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:00.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:00.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:00 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:31:01 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:02.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:02 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:31:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:31:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:02.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:31:02 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:02 np0005592158 ceph-mon[81715]: Health check update: 31 slow ops, oldest one blocked for 3248 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:31:03 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:04.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:31:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:04.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:31:04 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:06 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:31:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:06.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:31:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:06.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:07 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:07 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:31:08 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:08 np0005592158 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 3257 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:31:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:31:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:08.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:31:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:08.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:09 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:10 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:10.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:10.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:11 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:12.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:12 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:12 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:31:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:31:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:12.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:31:13 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:14 np0005592158 podman[233685]: 2026-01-22 14:31:14.094777177 +0000 UTC m=+0.088829945 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:31:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:14.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:14.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:14 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:15 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:31:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:16.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:31:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:16.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:16 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:17 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:31:17 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:17 np0005592158 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 3267 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:31:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:31:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4050486163' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:31:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:31:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4050486163' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:31:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:18.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:18.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:18 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:19 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:19 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:31:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:20.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:31:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:20.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:20 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:21 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:22.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:22 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:31:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:22.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:22 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:24 np0005592158 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 3272 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:31:24 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:31:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:24.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:31:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:24.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:25 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:26 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:31:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:26.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:31:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:26.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:27 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:27 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:31:28 np0005592158 podman[233711]: 2026-01-22 14:31:28.064572314 +0000 UTC m=+0.050079412 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Jan 22 09:31:28 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:28.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:31:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:28.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:31:29 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:30 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:30.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:30.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:31 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:32 np0005592158 ceph-mon[81715]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:31:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:31:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:32.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:31:32 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:31:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:32.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:33 np0005592158 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 3282 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:31:33 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:34 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:34.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:34.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:35 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:36 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:31:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:36.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:31:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:36.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:37 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:37 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #106. Immutable memtables: 0.
Jan 22 09:31:37 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:37.291618) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:31:37 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 106
Jan 22 09:31:37 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092297291649, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 2441, "num_deletes": 251, "total_data_size": 4671704, "memory_usage": 4731520, "flush_reason": "Manual Compaction"}
Jan 22 09:31:37 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #107: started
Jan 22 09:31:37 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092297311192, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 107, "file_size": 3057164, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53493, "largest_seqno": 55929, "table_properties": {"data_size": 3048175, "index_size": 5163, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 23266, "raw_average_key_size": 21, "raw_value_size": 3028198, "raw_average_value_size": 2778, "num_data_blocks": 222, "num_entries": 1090, "num_filter_entries": 1090, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092130, "oldest_key_time": 1769092130, "file_creation_time": 1769092297, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:31:37 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 19612 microseconds, and 8267 cpu microseconds.
Jan 22 09:31:37 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:31:37 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:37.311227) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #107: 3057164 bytes OK
Jan 22 09:31:37 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:37.311245) [db/memtable_list.cc:519] [default] Level-0 commit table #107 started
Jan 22 09:31:37 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:37.312434) [db/memtable_list.cc:722] [default] Level-0 commit table #107: memtable #1 done
Jan 22 09:31:37 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:37.312445) EVENT_LOG_v1 {"time_micros": 1769092297312442, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:31:37 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:37.312461) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:31:37 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 4660582, prev total WAL file size 4660582, number of live WAL files 2.
Jan 22 09:31:37 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000103.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:31:37 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:37.313465) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Jan 22 09:31:37 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:31:37 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [107(2985KB)], [105(9846KB)]
Jan 22 09:31:37 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092297313519, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [107], "files_L6": [105], "score": -1, "input_data_size": 13140406, "oldest_snapshot_seqno": -1}
Jan 22 09:31:37 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #108: 10190 keys, 11564234 bytes, temperature: kUnknown
Jan 22 09:31:37 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092297366746, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 108, "file_size": 11564234, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11504810, "index_size": 32816, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25541, "raw_key_size": 273402, "raw_average_key_size": 26, "raw_value_size": 11327818, "raw_average_value_size": 1111, "num_data_blocks": 1246, "num_entries": 10190, "num_filter_entries": 10190, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769092297, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 108, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:31:37 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:31:37 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:37.367011) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 11564234 bytes
Jan 22 09:31:37 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:37.368052) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 246.4 rd, 216.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 9.6 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(8.1) write-amplify(3.8) OK, records in: 10705, records dropped: 515 output_compression: NoCompression
Jan 22 09:31:37 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:37.368070) EVENT_LOG_v1 {"time_micros": 1769092297368061, "job": 66, "event": "compaction_finished", "compaction_time_micros": 53325, "compaction_time_cpu_micros": 27000, "output_level": 6, "num_output_files": 1, "total_output_size": 11564234, "num_input_records": 10705, "num_output_records": 10190, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:31:37 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:31:37 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092297368727, "job": 66, "event": "table_file_deletion", "file_number": 107}
Jan 22 09:31:37 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000105.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:31:37 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092297370193, "job": 66, "event": "table_file_deletion", "file_number": 105}
Jan 22 09:31:37 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:37.313394) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:31:37 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:37.370241) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:31:37 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:37.370245) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:31:37 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:37.370246) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:31:37 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:37.370248) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:31:37 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:37.370249) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:31:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:31:38 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:38 np0005592158 ceph-mon[81715]: Health check update: 36 slow ops, oldest one blocked for 3287 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:31:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:38.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:38.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:39 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:40 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:40.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:40.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:41 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:42 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:42 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 09:31:42 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:31:42 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:31:42 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:31:42 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #109. Immutable memtables: 0.
Jan 22 09:31:42 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:42.464115) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:31:42 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 109
Jan 22 09:31:42 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092302464174, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 349, "num_deletes": 258, "total_data_size": 193899, "memory_usage": 201960, "flush_reason": "Manual Compaction"}
Jan 22 09:31:42 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #110: started
Jan 22 09:31:42 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092302467048, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 110, "file_size": 127264, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 55934, "largest_seqno": 56278, "table_properties": {"data_size": 125158, "index_size": 270, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5510, "raw_average_key_size": 18, "raw_value_size": 120743, "raw_average_value_size": 397, "num_data_blocks": 12, "num_entries": 304, "num_filter_entries": 304, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092298, "oldest_key_time": 1769092298, "file_creation_time": 1769092302, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:31:42 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 2952 microseconds, and 1030 cpu microseconds.
Jan 22 09:31:42 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:31:42 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:42.467079) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #110: 127264 bytes OK
Jan 22 09:31:42 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:42.467091) [db/memtable_list.cc:519] [default] Level-0 commit table #110 started
Jan 22 09:31:42 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:42.469360) [db/memtable_list.cc:722] [default] Level-0 commit table #110: memtable #1 done
Jan 22 09:31:42 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:42.469371) EVENT_LOG_v1 {"time_micros": 1769092302469367, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:31:42 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:42.469386) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:31:42 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 191450, prev total WAL file size 191450, number of live WAL files 2.
Jan 22 09:31:42 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000106.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:31:42 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:42.469678) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323630' seq:72057594037927935, type:22 .. '6C6F676D0032353134' seq:0, type:0; will stop at (end)
Jan 22 09:31:42 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:31:42 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [110(124KB)], [108(11MB)]
Jan 22 09:31:42 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092302469707, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [110], "files_L6": [108], "score": -1, "input_data_size": 11691498, "oldest_snapshot_seqno": -1}
Jan 22 09:31:42 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #111: 9967 keys, 11552426 bytes, temperature: kUnknown
Jan 22 09:31:42 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092302524569, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 111, "file_size": 11552426, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11494045, "index_size": 32349, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24965, "raw_key_size": 269773, "raw_average_key_size": 27, "raw_value_size": 11320386, "raw_average_value_size": 1135, "num_data_blocks": 1223, "num_entries": 9967, "num_filter_entries": 9967, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769092302, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 111, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:31:42 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:31:42 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:42.525107) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 11552426 bytes
Jan 22 09:31:42 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:42.527041) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 211.8 rd, 209.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 11.0 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(182.6) write-amplify(90.8) OK, records in: 10494, records dropped: 527 output_compression: NoCompression
Jan 22 09:31:42 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:42.527065) EVENT_LOG_v1 {"time_micros": 1769092302527054, "job": 68, "event": "compaction_finished", "compaction_time_micros": 55204, "compaction_time_cpu_micros": 25359, "output_level": 6, "num_output_files": 1, "total_output_size": 11552426, "num_input_records": 10494, "num_output_records": 9967, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:31:42 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:31:42 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092302527751, "job": 68, "event": "table_file_deletion", "file_number": 110}
Jan 22 09:31:42 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000108.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:31:42 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092302530763, "job": 68, "event": "table_file_deletion", "file_number": 108}
Jan 22 09:31:42 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:42.469617) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:31:42 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:42.530984) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:31:42 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:42.530989) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:31:42 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:42.530990) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:31:42 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:42.530992) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:31:42 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:31:42.530993) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:31:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:42.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:42 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:31:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:42.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:43 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:43 np0005592158 ceph-mon[81715]: Health check update: 36 slow ops, oldest one blocked for 3292 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:31:44 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:44.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:44.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:45 np0005592158 podman[233862]: 2026-01-22 14:31:45.098932348 +0000 UTC m=+0.088247559 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Jan 22 09:31:45 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:46 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:46.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:46.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:47 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:47 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:31:47 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:31:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:31:47.468 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:31:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:31:47.468 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:31:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:31:47.468 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:31:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:31:48 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:48 np0005592158 ceph-mon[81715]: Health check update: 36 slow ops, oldest one blocked for 3297 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:31:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:48.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:48.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:49 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:50 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:31:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:50.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:31:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:50.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:51 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:52 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:52.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:31:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:52.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:53 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:53 np0005592158 ceph-mon[81715]: Health check update: 36 slow ops, oldest one blocked for 3302 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:31:54 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:54.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:31:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:54.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:31:55 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:56 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:56.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:56.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:57 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:57 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:31:58 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:58.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:31:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:58.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:59 np0005592158 podman[233938]: 2026-01-22 14:31:59.050382347 +0000 UTC m=+0.042510727 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:31:59 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:00 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:00.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:00.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:01 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:02 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:02 np0005592158 ceph-mon[81715]: Health check update: 36 slow ops, oldest one blocked for 3307 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:32:02 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:32:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:02.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:02.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:03 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:04 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:04.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:04.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:05 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:06.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:06 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:06.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:07 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:32:07 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:07 np0005592158 ceph-mon[81715]: Health check update: 36 slow ops, oldest one blocked for 3317 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:32:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:08.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:08 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:08.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:09 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:10.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:10 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:10.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:11 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:12 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:32:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:32:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:12.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:32:12 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:12.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:13 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:13 np0005592158 ceph-mon[81715]: Health check update: 36 slow ops, oldest one blocked for 3322 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:32:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:14.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:14 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:14.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:15 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:16 np0005592158 podman[233957]: 2026-01-22 14:32:16.116605818 +0000 UTC m=+0.107092645 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Jan 22 09:32:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:16.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:16 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:16.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:17 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:32:17 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:18.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:18 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:18.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:19 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:20.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:32:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:20.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:32:20 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:22 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:22 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:22 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:32:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:22.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:22.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:23 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:23 np0005592158 ceph-mon[81715]: Health check update: 36 slow ops, oldest one blocked for 3327 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:32:24 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:24.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:24.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:25 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:26 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:32:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:26.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:32:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:32:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:26.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:32:27 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:27 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:32:28 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:28 np0005592158 ceph-mon[81715]: Health check update: 36 slow ops, oldest one blocked for 3337 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:32:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:28.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:28.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:29 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:30 np0005592158 podman[233985]: 2026-01-22 14:32:30.091492322 +0000 UTC m=+0.072499124 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:32:30 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:30.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:32:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:30.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:32:31 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:32 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:32 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:32:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:32.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:32.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:33 np0005592158 ceph-mon[81715]: Health check update: 36 slow ops, oldest one blocked for 3342 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:32:33 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:34 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:34.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:32:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:34.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:32:35 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:36 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:36.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:36.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:37 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:32:38 np0005592158 ceph-mon[81715]: Health check update: 36 slow ops, oldest one blocked for 3348 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:32:38 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:32:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:38.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:32:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:38.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:39 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:40 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:40.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:40.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:41 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:42 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:32:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:32:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:42.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:32:42 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:42.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:43 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:43 np0005592158 ceph-mon[81715]: Health check update: 36 slow ops, oldest one blocked for 3352 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:32:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:44.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:44 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:44.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:45 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 09:32:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:32:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:46.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:32:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:46.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:46 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:47 np0005592158 podman[234004]: 2026-01-22 14:32:47.096214226 +0000 UTC m=+0.085184114 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 09:32:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:32:47.469 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:32:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:32:47.470 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:32:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:32:47.470 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:32:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:32:47 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:32:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:48.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:32:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:48.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:48 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:48 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:32:48 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:32:48 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:32:48 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:49 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:50.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:50.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:50 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:51 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:32:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:52.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:52.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:52 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 3357 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:32:52 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:54 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:32:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:54.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:32:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:32:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:54.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:32:55 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:32:55 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:32:55 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:56 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:56.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:56.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:57 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:57 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #112. Immutable memtables: 0.
Jan 22 09:32:57 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:32:57.484007) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:32:57 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 112
Jan 22 09:32:57 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092377484341, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 1257, "num_deletes": 252, "total_data_size": 2148717, "memory_usage": 2179016, "flush_reason": "Manual Compaction"}
Jan 22 09:32:57 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #113: started
Jan 22 09:32:57 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092377493941, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 113, "file_size": 920251, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56283, "largest_seqno": 57535, "table_properties": {"data_size": 916035, "index_size": 1612, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 13120, "raw_average_key_size": 21, "raw_value_size": 906101, "raw_average_value_size": 1487, "num_data_blocks": 70, "num_entries": 609, "num_filter_entries": 609, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092303, "oldest_key_time": 1769092303, "file_creation_time": 1769092377, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:32:57 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 9818 microseconds, and 4928 cpu microseconds.
Jan 22 09:32:57 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:32:57 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:32:57.494089) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #113: 920251 bytes OK
Jan 22 09:32:57 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:32:57.494144) [db/memtable_list.cc:519] [default] Level-0 commit table #113 started
Jan 22 09:32:57 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:32:57.495377) [db/memtable_list.cc:722] [default] Level-0 commit table #113: memtable #1 done
Jan 22 09:32:57 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:32:57.495390) EVENT_LOG_v1 {"time_micros": 1769092377495386, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:32:57 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:32:57.495407) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:32:57 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 2142534, prev total WAL file size 2142534, number of live WAL files 2.
Jan 22 09:32:57 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000109.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:32:57 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:32:57.496489) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353035' seq:72057594037927935, type:22 .. '6D6772737461740031373538' seq:0, type:0; will stop at (end)
Jan 22 09:32:57 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:32:57 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [113(898KB)], [111(11MB)]
Jan 22 09:32:57 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092377496549, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [113], "files_L6": [111], "score": -1, "input_data_size": 12472677, "oldest_snapshot_seqno": -1}
Jan 22 09:32:57 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:32:58 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #114: 10090 keys, 9038002 bytes, temperature: kUnknown
Jan 22 09:32:58 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092378054142, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 114, "file_size": 9038002, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8982954, "index_size": 28696, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25285, "raw_key_size": 273005, "raw_average_key_size": 27, "raw_value_size": 8811278, "raw_average_value_size": 873, "num_data_blocks": 1070, "num_entries": 10090, "num_filter_entries": 10090, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769092377, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 114, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:32:58 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:32:58 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:32:58.055865) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 9038002 bytes
Jan 22 09:32:58 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:32:58.060477) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 22.4 rd, 16.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 11.0 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(23.4) write-amplify(9.8) OK, records in: 10576, records dropped: 486 output_compression: NoCompression
Jan 22 09:32:58 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:32:58.060495) EVENT_LOG_v1 {"time_micros": 1769092378060487, "job": 70, "event": "compaction_finished", "compaction_time_micros": 557684, "compaction_time_cpu_micros": 25114, "output_level": 6, "num_output_files": 1, "total_output_size": 9038002, "num_input_records": 10576, "num_output_records": 10090, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:32:58 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:32:58 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092378061396, "job": 70, "event": "table_file_deletion", "file_number": 113}
Jan 22 09:32:58 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000111.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:32:58 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092378064309, "job": 70, "event": "table_file_deletion", "file_number": 111}
Jan 22 09:32:58 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:32:57.496369) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:32:58 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:32:58.064509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:32:58 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:32:58.064515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:32:58 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:32:58.064517) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:32:58 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:32:58.064518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:32:58 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:32:58.064520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:32:58 np0005592158 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 3368 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:32:58 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:32:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:58.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:32:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:32:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:58.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:59 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:00 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:00.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:00.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:01 np0005592158 podman[234212]: 2026-01-22 14:33:01.05552091 +0000 UTC m=+0.050426709 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:33:01 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:02 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:02 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:33:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:02.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:02.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:03 np0005592158 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 3373 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:03 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:04 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:04.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:04.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:05 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:06 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:06.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:06.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:07 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:07 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:33:08 np0005592158 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 3378 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:08 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:08.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:33:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:08.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:33:09 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:09 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:33:09.437 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:33:09 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:33:09.438 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:33:10 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:10.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:10.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:11 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:33:11.440 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:33:12 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:12 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:12 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:33:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:33:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:12.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:33:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:12.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:13 np0005592158 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 3383 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:13 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:14 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:33:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:14.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:33:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:14.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:15 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:16 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:33:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:16.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:33:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:16.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:17 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:17 np0005592158 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 3388 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:17 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:33:18 np0005592158 podman[234232]: 2026-01-22 14:33:18.09457805 +0000 UTC m=+0.086941762 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Jan 22 09:33:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:18.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:18 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:18.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:19 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:33:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:20.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:33:20 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:20.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:21 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:22 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:33:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:22.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:22 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:22.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:23 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:23 np0005592158 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 3393 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:24.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:24 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:24.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:25 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:26.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:26 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:26.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:27 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:33:27 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:28.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:28 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:28 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:28.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:29 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:33:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:30.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:33:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:30.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:30 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:31 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:32 np0005592158 podman[234259]: 2026-01-22 14:33:32.043811095 +0000 UTC m=+0.040017749 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 22 09:33:32 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:33:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:33:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:32.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:33:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:32.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:32 np0005592158 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 3398 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:32 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:34 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:34.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:34.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:35 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:36 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:36.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:36.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:37 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:33:38 np0005592158 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 3408 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:38 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:38.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:38.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:39 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:40 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:40.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:40.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:41 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:42 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:42 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:33:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:42.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:42.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:43 np0005592158 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 3413 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:43 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:44 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:44.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:44.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:45 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:46 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:33:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:46.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:33:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:46.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:47 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:33:47.471 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:33:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:33:47.471 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:33:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:33:47.472 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:33:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:33:48 np0005592158 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 3418 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:48 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:33:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:48.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:33:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:48.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:49 np0005592158 podman[234279]: 2026-01-22 14:33:49.09863345 +0000 UTC m=+0.090510219 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 22 09:33:49 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:50 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:50.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:50.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:51 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:52 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:33:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:33:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:52.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:33:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:52.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:53 np0005592158 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 3423 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:53 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:54 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:54.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:54.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:55 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:56.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:56 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:56 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:33:56 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:33:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:56.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:57 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:33:57 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:57 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:33:57 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:33:57 np0005592158 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 3428 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:33:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:58.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:33:58 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:58 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:33:58 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:33:58 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:33:58 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:33:58 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:33:58 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e151 e151: 3 total, 3 up, 3 in
Jan 22 09:33:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:33:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:59.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:59 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:00.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:00 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:01.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:02 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:34:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:02.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:03.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:03 np0005592158 podman[234554]: 2026-01-22 14:34:03.076604886 +0000 UTC m=+0.072133784 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:34:03 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:03 np0005592158 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 3433 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:34:03 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:04.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:05.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:05 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:05 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:34:05 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:34:06 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:34:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:06.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:34:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:34:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:07.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:34:07 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:07 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:34:08 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:08 np0005592158 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 3438 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:34:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:08.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:09.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:09 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:09 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:09 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:34:09.878 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:34:09 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:34:09.879 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:34:10 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:10.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:11.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:11 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:12 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:34:12 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:12.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:12 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:34:12.881 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:34:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:13.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:13 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:13 np0005592158 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 3443 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:34:14 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:34:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:14.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:34:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:15.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:15 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:16.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:16 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:16 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #115. Immutable memtables: 0.
Jan 22 09:34:16 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:34:16.887841) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:34:16 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 115
Jan 22 09:34:16 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092456887872, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 1349, "num_deletes": 251, "total_data_size": 2458298, "memory_usage": 2500680, "flush_reason": "Manual Compaction"}
Jan 22 09:34:16 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #116: started
Jan 22 09:34:16 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092456898057, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 116, "file_size": 1594051, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 57540, "largest_seqno": 58884, "table_properties": {"data_size": 1588533, "index_size": 2722, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14237, "raw_average_key_size": 20, "raw_value_size": 1576433, "raw_average_value_size": 2314, "num_data_blocks": 118, "num_entries": 681, "num_filter_entries": 681, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092378, "oldest_key_time": 1769092378, "file_creation_time": 1769092456, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:34:16 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 10267 microseconds, and 4372 cpu microseconds.
Jan 22 09:34:16 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:34:16 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:34:16.898105) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #116: 1594051 bytes OK
Jan 22 09:34:16 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:34:16.898125) [db/memtable_list.cc:519] [default] Level-0 commit table #116 started
Jan 22 09:34:16 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:34:16.899302) [db/memtable_list.cc:722] [default] Level-0 commit table #116: memtable #1 done
Jan 22 09:34:16 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:34:16.899317) EVENT_LOG_v1 {"time_micros": 1769092456899312, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:34:16 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:34:16.899333) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:34:16 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 2451712, prev total WAL file size 2451712, number of live WAL files 2.
Jan 22 09:34:16 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000112.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:34:16 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:34:16.900139) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Jan 22 09:34:16 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:34:16 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [116(1556KB)], [114(8826KB)]
Jan 22 09:34:16 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092456900184, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [116], "files_L6": [114], "score": -1, "input_data_size": 10632053, "oldest_snapshot_seqno": -1}
Jan 22 09:34:16 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #117: 10250 keys, 8931758 bytes, temperature: kUnknown
Jan 22 09:34:16 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092456943359, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 117, "file_size": 8931758, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8875929, "index_size": 29093, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25669, "raw_key_size": 277564, "raw_average_key_size": 27, "raw_value_size": 8701570, "raw_average_value_size": 848, "num_data_blocks": 1083, "num_entries": 10250, "num_filter_entries": 10250, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769092456, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 117, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:34:16 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:34:16 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:34:16.943622) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 8931758 bytes
Jan 22 09:34:16 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:34:16.945500) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 245.8 rd, 206.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 8.6 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(12.3) write-amplify(5.6) OK, records in: 10771, records dropped: 521 output_compression: NoCompression
Jan 22 09:34:16 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:34:16.945516) EVENT_LOG_v1 {"time_micros": 1769092456945507, "job": 72, "event": "compaction_finished", "compaction_time_micros": 43254, "compaction_time_cpu_micros": 21319, "output_level": 6, "num_output_files": 1, "total_output_size": 8931758, "num_input_records": 10771, "num_output_records": 10250, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:34:16 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:34:16 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092456945845, "job": 72, "event": "table_file_deletion", "file_number": 116}
Jan 22 09:34:16 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000114.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:34:16 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092456947381, "job": 72, "event": "table_file_deletion", "file_number": 114}
Jan 22 09:34:16 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:34:16.899971) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:34:16 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:34:16.947447) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:34:16 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:34:16.947452) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:34:16 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:34:16.947454) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:34:16 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:34:16.947455) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:34:16 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:34:16.947457) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:34:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:17.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:17 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:34:17 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:17 np0005592158 ceph-mon[81715]: from='client.? 192.168.122.102:0/2506543262' entity='client.openstack' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "192.168.122.102:0/3735414885"}]: dispatch
Jan 22 09:34:17 np0005592158 ceph-mon[81715]: from='client.? ' entity='client.openstack' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "192.168.122.102:0/3735414885"}]: dispatch
Jan 22 09:34:17 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e152 e152: 3 total, 3 up, 3 in
Jan 22 09:34:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:18.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 e153: 3 total, 3 up, 3 in
Jan 22 09:34:18 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:18 np0005592158 ceph-mon[81715]: from='client.? ' entity='client.openstack' cmd='[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "192.168.122.102:0/3735414885"}]': finished
Jan 22 09:34:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:19.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:20 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:20 np0005592158 podman[234624]: 2026-01-22 14:34:20.096457688 +0000 UTC m=+0.088104183 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 09:34:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:20.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:21.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:21 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:21 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:22 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:22 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:34:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:22.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:23.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:23 np0005592158 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 3453 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:34:24 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:24 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:34:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:24.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:34:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:25.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:26 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:26 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:26.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:27.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:27 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:27 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:34:28 np0005592158 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 3458 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:34:28 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:28.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:29.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:29 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:34:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:30.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:34:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:31.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:31 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:32 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:32 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:34:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:32.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:33.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:33 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:33 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 3463 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:34:33 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:34 np0005592158 podman[234650]: 2026-01-22 14:34:34.06381052 +0000 UTC m=+0.055417544 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Jan 22 09:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 09:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.5 total, 600.0 interval#012Cumulative writes: 10K writes, 38K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 10K writes, 3163 syncs, 3.46 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1666 writes, 5174 keys, 1666 commit groups, 1.0 writes per commit group, ingest: 5.52 MB, 0.01 MB/s#012Interval WAL: 1666 writes, 732 syncs, 2.28 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 09:34:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:34.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:34:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:35.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:34:35 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:35 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:36 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:36.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:37.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:37 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:37 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 3468 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:34:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:34:38 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:38.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:39.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:39 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:40 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:40.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:41.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:41 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:42 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:42 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:34:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:42.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:43.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:43 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 3473 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:34:43 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:44.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:45.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:45 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:46 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:34:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:46.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:34:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:47.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:34:47.472 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:34:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:34:47.473 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:34:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:34:47.473 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:34:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:34:47 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:48 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:34:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:48.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:34:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:49.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:49 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:50 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:50.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:34:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:51.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:34:51 np0005592158 podman[234670]: 2026-01-22 14:34:51.108735628 +0000 UTC m=+0.094367203 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:34:51 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:34:52 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:52 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 3478 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:34:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:52.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:34:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:53.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:34:54 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:54.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:55.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:55 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:55 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:56 np0005592158 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:34:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:56.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:57.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:57 np0005592158 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:34:57 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:34:58 np0005592158 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:34:58 np0005592158 ceph-mon[81715]: Health check update: 18 slow ops, oldest one blocked for 3488 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:34:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:58.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:34:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:59.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:59 np0005592158 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:35:00 np0005592158 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:35:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:00.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:01.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:01 np0005592158 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:35:02 np0005592158 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:35:02 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:35:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:02.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:03.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:03 np0005592158 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:35:03 np0005592158 ceph-mon[81715]: Health check update: 18 slow ops, oldest one blocked for 3493 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:35:04 np0005592158 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:35:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:04.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:05 np0005592158 podman[234698]: 2026-01-22 14:35:05.085734809 +0000 UTC m=+0.081705682 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 22 09:35:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:35:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:05.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:35:05 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:06 np0005592158 podman[234890]: 2026-01-22 14:35:06.372690055 +0000 UTC m=+0.079486281 container exec 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:35:06 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:06 np0005592158 podman[234890]: 2026-01-22 14:35:06.487069296 +0000 UTC m=+0.193865532 container exec_died 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 09:35:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:35:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:06.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:35:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:35:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:07.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:35:07 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:07 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:07 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:07 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:07 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:07 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:35:08 np0005592158 podman[235286]: 2026-01-22 14:35:08.33539481 +0000 UTC m=+0.044427157 container create c3a2192b2315ad15558714250576068917605b5b0815977b4bedffee29034a83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_swartz, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:35:08 np0005592158 systemd[1]: Started libpod-conmon-c3a2192b2315ad15558714250576068917605b5b0815977b4bedffee29034a83.scope.
Jan 22 09:35:08 np0005592158 podman[235286]: 2026-01-22 14:35:08.317850728 +0000 UTC m=+0.026883105 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:35:08 np0005592158 systemd[1]: Started libcrun container.
Jan 22 09:35:08 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:08 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3498 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:35:08 np0005592158 podman[235286]: 2026-01-22 14:35:08.43823042 +0000 UTC m=+0.147262797 container init c3a2192b2315ad15558714250576068917605b5b0815977b4bedffee29034a83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 22 09:35:08 np0005592158 podman[235286]: 2026-01-22 14:35:08.448995799 +0000 UTC m=+0.158028156 container start c3a2192b2315ad15558714250576068917605b5b0815977b4bedffee29034a83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 09:35:08 np0005592158 podman[235286]: 2026-01-22 14:35:08.453129821 +0000 UTC m=+0.162162178 container attach c3a2192b2315ad15558714250576068917605b5b0815977b4bedffee29034a83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_swartz, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:35:08 np0005592158 suspicious_swartz[235302]: 167 167
Jan 22 09:35:08 np0005592158 systemd[1]: libpod-c3a2192b2315ad15558714250576068917605b5b0815977b4bedffee29034a83.scope: Deactivated successfully.
Jan 22 09:35:08 np0005592158 podman[235307]: 2026-01-22 14:35:08.489538551 +0000 UTC m=+0.025538749 container died c3a2192b2315ad15558714250576068917605b5b0815977b4bedffee29034a83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 09:35:08 np0005592158 systemd[1]: var-lib-containers-storage-overlay-22f0ba300e207645b24e39b449db2e5248126083b603c759ad91eca8b34d8548-merged.mount: Deactivated successfully.
Jan 22 09:35:08 np0005592158 podman[235307]: 2026-01-22 14:35:08.529703932 +0000 UTC m=+0.065704100 container remove c3a2192b2315ad15558714250576068917605b5b0815977b4bedffee29034a83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 22 09:35:08 np0005592158 systemd[1]: libpod-conmon-c3a2192b2315ad15558714250576068917605b5b0815977b4bedffee29034a83.scope: Deactivated successfully.
Jan 22 09:35:08 np0005592158 podman[235326]: 2026-01-22 14:35:08.681718087 +0000 UTC m=+0.042329542 container create 37d098f5b84bb883da955c4240132035fe6efffb5e5bc493d4597e97b08b8ba5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 22 09:35:08 np0005592158 systemd[1]: Started libpod-conmon-37d098f5b84bb883da955c4240132035fe6efffb5e5bc493d4597e97b08b8ba5.scope.
Jan 22 09:35:08 np0005592158 systemd[1]: Started libcrun container.
Jan 22 09:35:08 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a6c855817a9ea6e9eb8e37f976284332c63c0a49f3dfcf1a2277e37be0d264c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:35:08 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a6c855817a9ea6e9eb8e37f976284332c63c0a49f3dfcf1a2277e37be0d264c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:35:08 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a6c855817a9ea6e9eb8e37f976284332c63c0a49f3dfcf1a2277e37be0d264c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:35:08 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a6c855817a9ea6e9eb8e37f976284332c63c0a49f3dfcf1a2277e37be0d264c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:35:08 np0005592158 podman[235326]: 2026-01-22 14:35:08.662953911 +0000 UTC m=+0.023565396 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:35:08 np0005592158 podman[235326]: 2026-01-22 14:35:08.760868818 +0000 UTC m=+0.121480293 container init 37d098f5b84bb883da955c4240132035fe6efffb5e5bc493d4597e97b08b8ba5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_thompson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:35:08 np0005592158 podman[235326]: 2026-01-22 14:35:08.767855426 +0000 UTC m=+0.128466881 container start 37d098f5b84bb883da955c4240132035fe6efffb5e5bc493d4597e97b08b8ba5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 22 09:35:08 np0005592158 podman[235326]: 2026-01-22 14:35:08.771867124 +0000 UTC m=+0.132478619 container attach 37d098f5b84bb883da955c4240132035fe6efffb5e5bc493d4597e97b08b8ba5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_thompson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:35:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:08.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:35:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:09.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:35:09 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:09 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:09 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:10 np0005592158 epic_thompson[235342]: [
Jan 22 09:35:10 np0005592158 epic_thompson[235342]:    {
Jan 22 09:35:10 np0005592158 epic_thompson[235342]:        "available": false,
Jan 22 09:35:10 np0005592158 epic_thompson[235342]:        "ceph_device": false,
Jan 22 09:35:10 np0005592158 epic_thompson[235342]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 22 09:35:10 np0005592158 epic_thompson[235342]:        "lsm_data": {},
Jan 22 09:35:10 np0005592158 epic_thompson[235342]:        "lvs": [],
Jan 22 09:35:10 np0005592158 epic_thompson[235342]:        "path": "/dev/sr0",
Jan 22 09:35:10 np0005592158 epic_thompson[235342]:        "rejected_reasons": [
Jan 22 09:35:10 np0005592158 epic_thompson[235342]:            "Insufficient space (<5GB)",
Jan 22 09:35:10 np0005592158 epic_thompson[235342]:            "Has a FileSystem"
Jan 22 09:35:10 np0005592158 epic_thompson[235342]:        ],
Jan 22 09:35:10 np0005592158 epic_thompson[235342]:        "sys_api": {
Jan 22 09:35:10 np0005592158 epic_thompson[235342]:            "actuators": null,
Jan 22 09:35:10 np0005592158 epic_thompson[235342]:            "device_nodes": "sr0",
Jan 22 09:35:10 np0005592158 epic_thompson[235342]:            "devname": "sr0",
Jan 22 09:35:10 np0005592158 epic_thompson[235342]:            "human_readable_size": "482.00 KB",
Jan 22 09:35:10 np0005592158 epic_thompson[235342]:            "id_bus": "ata",
Jan 22 09:35:10 np0005592158 epic_thompson[235342]:            "model": "QEMU DVD-ROM",
Jan 22 09:35:10 np0005592158 epic_thompson[235342]:            "nr_requests": "2",
Jan 22 09:35:10 np0005592158 epic_thompson[235342]:            "parent": "/dev/sr0",
Jan 22 09:35:10 np0005592158 epic_thompson[235342]:            "partitions": {},
Jan 22 09:35:10 np0005592158 epic_thompson[235342]:            "path": "/dev/sr0",
Jan 22 09:35:10 np0005592158 epic_thompson[235342]:            "removable": "1",
Jan 22 09:35:10 np0005592158 epic_thompson[235342]:            "rev": "2.5+",
Jan 22 09:35:10 np0005592158 epic_thompson[235342]:            "ro": "0",
Jan 22 09:35:10 np0005592158 epic_thompson[235342]:            "rotational": "1",
Jan 22 09:35:10 np0005592158 epic_thompson[235342]:            "sas_address": "",
Jan 22 09:35:10 np0005592158 epic_thompson[235342]:            "sas_device_handle": "",
Jan 22 09:35:10 np0005592158 epic_thompson[235342]:            "scheduler_mode": "mq-deadline",
Jan 22 09:35:10 np0005592158 epic_thompson[235342]:            "sectors": 0,
Jan 22 09:35:10 np0005592158 epic_thompson[235342]:            "sectorsize": "2048",
Jan 22 09:35:10 np0005592158 epic_thompson[235342]:            "size": 493568.0,
Jan 22 09:35:10 np0005592158 epic_thompson[235342]:            "support_discard": "2048",
Jan 22 09:35:10 np0005592158 epic_thompson[235342]:            "type": "disk",
Jan 22 09:35:10 np0005592158 epic_thompson[235342]:            "vendor": "QEMU"
Jan 22 09:35:10 np0005592158 epic_thompson[235342]:        }
Jan 22 09:35:10 np0005592158 epic_thompson[235342]:    }
Jan 22 09:35:10 np0005592158 epic_thompson[235342]: ]
Jan 22 09:35:10 np0005592158 systemd[1]: libpod-37d098f5b84bb883da955c4240132035fe6efffb5e5bc493d4597e97b08b8ba5.scope: Deactivated successfully.
Jan 22 09:35:10 np0005592158 systemd[1]: libpod-37d098f5b84bb883da955c4240132035fe6efffb5e5bc493d4597e97b08b8ba5.scope: Consumed 1.299s CPU time.
Jan 22 09:35:10 np0005592158 conmon[235342]: conmon 37d098f5b84bb883da95 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-37d098f5b84bb883da955c4240132035fe6efffb5e5bc493d4597e97b08b8ba5.scope/container/memory.events
Jan 22 09:35:10 np0005592158 podman[235326]: 2026-01-22 14:35:10.042270585 +0000 UTC m=+1.402882040 container died 37d098f5b84bb883da955c4240132035fe6efffb5e5bc493d4597e97b08b8ba5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_thompson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:35:10 np0005592158 systemd[1]: var-lib-containers-storage-overlay-5a6c855817a9ea6e9eb8e37f976284332c63c0a49f3dfcf1a2277e37be0d264c-merged.mount: Deactivated successfully.
Jan 22 09:35:10 np0005592158 podman[235326]: 2026-01-22 14:35:10.102425375 +0000 UTC m=+1.463036830 container remove 37d098f5b84bb883da955c4240132035fe6efffb5e5bc493d4597e97b08b8ba5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_thompson, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 09:35:10 np0005592158 systemd[1]: libpod-conmon-37d098f5b84bb883da955c4240132035fe6efffb5e5bc493d4597e97b08b8ba5.scope: Deactivated successfully.
Jan 22 09:35:10 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:10 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:10 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:10 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:35:10 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:10 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:35:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:10.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:11.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:11 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:12 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:12 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:35:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:12.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:13.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:13 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:13 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3503 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:35:14 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:14.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:35:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:15.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:35:15 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:16 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:16 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:16 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:35:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:16.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:35:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:17.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:17 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:17 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3508 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:35:17 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:35:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:35:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1274904820' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:35:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:35:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1274904820' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:35:18 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:35:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:18.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:35:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:19.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:19 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:20 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:35:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:20.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:35:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:21.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:22 np0005592158 podman[236662]: 2026-01-22 14:35:22.109011724 +0000 UTC m=+0.093732336 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 22 09:35:22 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:22 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:35:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:22.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:23.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:23 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:23 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:24 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:24.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:35:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:25.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:35:25 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:26 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:26.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:27.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:27 np0005592158 ceph-mon[81715]: Health check update: 30 slow ops, oldest one blocked for 3518 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:35:27 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:27 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:35:28 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:28.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:35:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:29.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:35:29 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:30 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:30.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:31.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:31 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:31 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 09:35:31 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.0 total, 600.0 interval#012Cumulative writes: 11K writes, 59K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.03 MB/s#012Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.10 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1902 writes, 9828 keys, 1902 commit groups, 1.0 writes per commit group, ingest: 16.83 MB, 0.03 MB/s#012Interval WAL: 1903 writes, 1903 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     61.1      1.05              0.21        36    0.029       0      0       0.0       0.0#012  L6      1/0    8.52 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.9    125.7    106.7      2.93              0.88        35    0.084    271K    19K       0.0       0.0#012 Sum      1/0    8.52 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.9     92.5     94.6      3.99              1.08        71    0.056    271K    19K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.5     80.1     80.4      0.95              0.21        14    0.068     72K   3610       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) 
Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0    125.7    106.7      2.93              0.88        35    0.084    271K    19K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     61.2      1.05              0.21        35    0.030       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3600.0 total, 600.0 interval#012Flush(GB): cumulative 0.063, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.37 GB write, 0.10 MB/s write, 0.36 GB read, 0.10 MB/s read, 4.0 seconds#012Interval compaction: 0.07 GB write, 0.13 MB/s write, 0.07 GB read, 0.13 MB/s read, 1.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f7686a91f0#2 capacity: 304.00 MB usage: 40.84 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000269 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(2162,39.07 MB,12.8518%) FilterBlock(71,759.30 KB,0.243915%) IndexBlock(71,1.03 MB,0.340045%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 22 09:35:32 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:35:32 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:32.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:33.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:33 np0005592158 ceph-mon[81715]: Health check update: 41 slow ops, oldest one blocked for 3523 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:35:33 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:34 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:34.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:35:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:35.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:35:35 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:36 np0005592158 podman[236688]: 2026-01-22 14:35:36.054974329 +0000 UTC m=+0.042031393 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 09:35:36 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:36.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:37.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:35:37 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:38 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:38.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:39.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:39 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:39 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:40 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:40.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:41.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:41 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:42 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:35:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:35:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:42.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:35:43 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:43 np0005592158 ceph-mon[81715]: Health check update: 41 slow ops, oldest one blocked for 3528 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:35:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:43.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:44 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:44.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:45 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:35:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:45.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:35:46 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:35:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:46.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:35:47 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:47.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:35:47.473 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:35:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:35:47.474 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:35:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:35:47.474 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:35:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:35:48 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:48 np0005592158 ceph-mon[81715]: Health check update: 41 slow ops, oldest one blocked for 3537 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:35:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:48.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:49 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:35:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:49.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:35:50 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:35:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:50.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:35:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:51.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:51 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:52 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:35:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:53.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:53 np0005592158 podman[236707]: 2026-01-22 14:35:53.105806203 +0000 UTC m=+0.088424922 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:35:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:53.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:53 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:53 np0005592158 ceph-mon[81715]: Health check update: 41 slow ops, oldest one blocked for 3542 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:35:54 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:55.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:55.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:55 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:56 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:57.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:35:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:57.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:35:57 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:57 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:35:58 np0005592158 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:35:58 np0005592158 ceph-mon[81715]: Health check update: 41 slow ops, oldest one blocked for 3547 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:35:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:59.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:35:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:59.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:59 np0005592158 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:00 np0005592158 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:36:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:01.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:36:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:01.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:01 np0005592158 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:02 np0005592158 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:02 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #118. Immutable memtables: 0.
Jan 22 09:36:02 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:02.545337) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:36:02 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 118
Jan 22 09:36:02 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092562545419, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 1723, "num_deletes": 255, "total_data_size": 3240435, "memory_usage": 3312272, "flush_reason": "Manual Compaction"}
Jan 22 09:36:02 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #119: started
Jan 22 09:36:02 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092562558456, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 119, "file_size": 2109887, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 58889, "largest_seqno": 60607, "table_properties": {"data_size": 2103097, "index_size": 3605, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17067, "raw_average_key_size": 20, "raw_value_size": 2088285, "raw_average_value_size": 2546, "num_data_blocks": 155, "num_entries": 820, "num_filter_entries": 820, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092457, "oldest_key_time": 1769092457, "file_creation_time": 1769092562, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:36:02 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 13172 microseconds, and 5856 cpu microseconds.
Jan 22 09:36:02 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:36:02 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:02.558521) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #119: 2109887 bytes OK
Jan 22 09:36:02 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:02.558540) [db/memtable_list.cc:519] [default] Level-0 commit table #119 started
Jan 22 09:36:02 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:02.559526) [db/memtable_list.cc:722] [default] Level-0 commit table #119: memtable #1 done
Jan 22 09:36:02 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:02.559540) EVENT_LOG_v1 {"time_micros": 1769092562559535, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:36:02 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:02.559555) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:36:02 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 3232307, prev total WAL file size 3232307, number of live WAL files 2.
Jan 22 09:36:02 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000115.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:36:02 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:02.560336) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353133' seq:72057594037927935, type:22 .. '6C6F676D0032373634' seq:0, type:0; will stop at (end)
Jan 22 09:36:02 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:36:02 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [119(2060KB)], [117(8722KB)]
Jan 22 09:36:02 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092562560401, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [119], "files_L6": [117], "score": -1, "input_data_size": 11041645, "oldest_snapshot_seqno": -1}
Jan 22 09:36:02 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #120: 10539 keys, 10878286 bytes, temperature: kUnknown
Jan 22 09:36:02 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092562614760, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 120, "file_size": 10878286, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10818855, "index_size": 31991, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26373, "raw_key_size": 285162, "raw_average_key_size": 27, "raw_value_size": 10637780, "raw_average_value_size": 1009, "num_data_blocks": 1202, "num_entries": 10539, "num_filter_entries": 10539, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769092562, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 120, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:36:02 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:36:02 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:02.615005) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 10878286 bytes
Jan 22 09:36:02 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:02.616304) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 202.8 rd, 199.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 8.5 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(10.4) write-amplify(5.2) OK, records in: 11070, records dropped: 531 output_compression: NoCompression
Jan 22 09:36:02 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:02.616319) EVENT_LOG_v1 {"time_micros": 1769092562616311, "job": 74, "event": "compaction_finished", "compaction_time_micros": 54436, "compaction_time_cpu_micros": 25084, "output_level": 6, "num_output_files": 1, "total_output_size": 10878286, "num_input_records": 11070, "num_output_records": 10539, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:36:02 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:36:02 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092562616831, "job": 74, "event": "table_file_deletion", "file_number": 119}
Jan 22 09:36:02 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000117.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:36:02 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092562618235, "job": 74, "event": "table_file_deletion", "file_number": 117}
Jan 22 09:36:02 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:02.560221) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:36:02 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:02.618315) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:36:02 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:02.618323) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:36:02 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:02.618326) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:36:02 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:02.618329) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:36:02 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:02.618332) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:36:02 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:36:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:03.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:03 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:36:03.044 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:36:03 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:36:03.045 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:36:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:03.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:03 np0005592158 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:03 np0005592158 ceph-mon[81715]: Health check update: 21 slow ops, oldest one blocked for 3552 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:36:04 np0005592158 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:05.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:05.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:05 np0005592158 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:06 np0005592158 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:07.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:07 np0005592158 podman[236730]: 2026-01-22 14:36:07.077374887 +0000 UTC m=+0.058083695 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 09:36:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:07.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:07 np0005592158 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:07 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:36:08 np0005592158 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:08 np0005592158 ceph-mon[81715]: Health check update: 21 slow ops, oldest one blocked for 3557 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:36:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:09.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:09.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:09 np0005592158 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:10 np0005592158 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:11.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:11.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:11 np0005592158 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:12 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:36:12.047 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:36:12 np0005592158 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:12 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:36:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:13.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:13.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:13 np0005592158 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:13 np0005592158 ceph-mon[81715]: Health check update: 21 slow ops, oldest one blocked for 3562 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:36:14 np0005592158 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:14 np0005592158 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:36:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:15.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:36:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:15.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:15 np0005592158 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:16 np0005592158 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:36:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:17.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:36:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:17.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:17 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:36:18 np0005592158 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:18 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:36:18 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:36:18 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:36:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:36:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:19.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:36:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:19.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:19 np0005592158 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:19 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:36:19 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:36:20 np0005592158 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:21.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:21.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:21 np0005592158 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:22 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:36:23 np0005592158 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:23 np0005592158 ceph-mon[81715]: Health check update: 21 slow ops, oldest one blocked for 3567 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:36:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:36:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:23.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:36:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:23.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:24 np0005592158 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:24 np0005592158 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:24 np0005592158 podman[236881]: 2026-01-22 14:36:24.087786277 +0000 UTC m=+0.082547435 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 22 09:36:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:25.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:25.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:25 np0005592158 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:25 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:36:25 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:36:26 np0005592158 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:27.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:27.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:27 np0005592158 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:27 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:36:28 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:28 np0005592158 ceph-mon[81715]: Health check update: 21 slow ops, oldest one blocked for 3578 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:36:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:29.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:29.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:30 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:31.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:31 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:31 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:31.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:32 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:32 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:36:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:36:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:33.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:36:33 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:33.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:34 np0005592158 ceph-mon[81715]: Health check update: 42 slow ops, oldest one blocked for 3583 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:36:34 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:35.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:35 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:36:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:35.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:36:36 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:36:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:37.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:36:37 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:36:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:37.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:36:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:36:38 np0005592158 podman[236957]: 2026-01-22 14:36:38.075221159 +0000 UTC m=+0.062730030 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 22 09:36:38 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:39.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:39 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:39.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:40 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:41.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:41.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:41 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:42 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:42 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:36:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:36:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:43.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:36:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:43.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:43 np0005592158 ceph-mon[81715]: Health check update: 42 slow ops, oldest one blocked for 3593 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:36:43 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:44 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:45.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:45.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:45 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:46 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:36:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:47.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:36:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:36:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:47.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:36:47 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:36:47.475 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:36:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:36:47.475 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:36:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:36:47.475 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:36:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:36:48 np0005592158 ceph-mon[81715]: Health check update: 42 slow ops, oldest one blocked for 3598 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:36:48 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:36:48.699 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:36:48 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:36:48.700 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:36:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:49.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:49.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:51.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:51.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:51 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:36:51.702 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:36:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:36:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:53.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:53.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:53 np0005592158 ceph-mon[81715]: Health check update: 0 slow ops, oldest one blocked for 3602 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:36:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:36:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:55.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:36:55 np0005592158 podman[236976]: 2026-01-22 14:36:55.108511523 +0000 UTC m=+0.087650491 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 22 09:36:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:55.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:57.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:36:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:57.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:36:57 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:36:58 np0005592158 ceph-mon[81715]: Health check update: 0 slow ops, oldest one blocked for 3607 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:36:58 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #121. Immutable memtables: 0.
Jan 22 09:36:58 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:58.351032) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:36:58 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 121
Jan 22 09:36:58 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092618351070, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 966, "num_deletes": 251, "total_data_size": 1655403, "memory_usage": 1681088, "flush_reason": "Manual Compaction"}
Jan 22 09:36:58 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #122: started
Jan 22 09:36:58 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092618358248, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 122, "file_size": 1077454, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 60612, "largest_seqno": 61573, "table_properties": {"data_size": 1073224, "index_size": 1818, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10654, "raw_average_key_size": 20, "raw_value_size": 1064219, "raw_average_value_size": 2027, "num_data_blocks": 79, "num_entries": 525, "num_filter_entries": 525, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092563, "oldest_key_time": 1769092563, "file_creation_time": 1769092618, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:36:58 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 7279 microseconds, and 3502 cpu microseconds.
Jan 22 09:36:58 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:36:58 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:58.358302) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #122: 1077454 bytes OK
Jan 22 09:36:58 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:58.358326) [db/memtable_list.cc:519] [default] Level-0 commit table #122 started
Jan 22 09:36:58 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:58.360294) [db/memtable_list.cc:722] [default] Level-0 commit table #122: memtable #1 done
Jan 22 09:36:58 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:58.360335) EVENT_LOG_v1 {"time_micros": 1769092618360327, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:36:58 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:58.360357) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:36:58 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 1650442, prev total WAL file size 1650442, number of live WAL files 2.
Jan 22 09:36:58 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000118.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:36:58 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:58.361145) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Jan 22 09:36:58 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:36:58 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [122(1052KB)], [120(10MB)]
Jan 22 09:36:58 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092618361220, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [122], "files_L6": [120], "score": -1, "input_data_size": 11955740, "oldest_snapshot_seqno": -1}
Jan 22 09:36:58 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #123: 10549 keys, 10380562 bytes, temperature: kUnknown
Jan 22 09:36:58 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092618416370, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 123, "file_size": 10380562, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10321358, "index_size": 31700, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26437, "raw_key_size": 286354, "raw_average_key_size": 27, "raw_value_size": 10140404, "raw_average_value_size": 961, "num_data_blocks": 1185, "num_entries": 10549, "num_filter_entries": 10549, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769092618, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 123, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:36:58 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:36:58 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:58.416649) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 10380562 bytes
Jan 22 09:36:58 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:58.418517) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 216.4 rd, 187.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 10.4 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(20.7) write-amplify(9.6) OK, records in: 11064, records dropped: 515 output_compression: NoCompression
Jan 22 09:36:58 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:58.418559) EVENT_LOG_v1 {"time_micros": 1769092618418529, "job": 76, "event": "compaction_finished", "compaction_time_micros": 55242, "compaction_time_cpu_micros": 28673, "output_level": 6, "num_output_files": 1, "total_output_size": 10380562, "num_input_records": 11064, "num_output_records": 10549, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:36:58 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:36:58 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092618418851, "job": 76, "event": "table_file_deletion", "file_number": 122}
Jan 22 09:36:58 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000120.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:36:58 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092618420731, "job": 76, "event": "table_file_deletion", "file_number": 120}
Jan 22 09:36:58 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:58.361057) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:36:58 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:58.420860) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:36:58 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:58.420864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:36:58 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:58.420866) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:36:58 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:58.420867) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:36:58 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:36:58.420869) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:36:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:59.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:36:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:36:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:59.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:37:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:01.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:37:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:01.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:37:02 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:37:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:03.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:03.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:03 np0005592158 ceph-mon[81715]: Health check update: 0 slow ops, oldest one blocked for 3612 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:05.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:37:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:05.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:37:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:07.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:07.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:07 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:37:08 np0005592158 ceph-mon[81715]: Health check update: 0 slow ops, oldest one blocked for 3618 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:09 np0005592158 podman[237002]: 2026-01-22 14:37:09.053484859 +0000 UTC m=+0.049992307 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 22 09:37:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:09.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:09.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:11.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:11.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:12 np0005592158 ceph-mon[81715]: Health check update: 0 slow ops, oldest one blocked for 3622 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:12 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:37:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.003000080s ======
Jan 22 09:37:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:13.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Jan 22 09:37:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:13.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:37:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:15.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:37:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:15.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:17.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:17.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:17 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:37:18 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:18 np0005592158 ceph-mon[81715]: Health check update: 0 slow ops, oldest one blocked for 3627 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:19.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:19.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:19 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:20 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:21.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:21.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:21 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:22 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:22 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:37:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:37:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:23.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:37:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:23.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:23 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:23 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3632 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:24 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:37:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:25.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:37:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:25.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:25 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:25 np0005592158 podman[237045]: 2026-01-22 14:37:25.605177155 +0000 UTC m=+0.076705137 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 22 09:37:26 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:26 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:37:26 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:37:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:27.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:27.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:27 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:27 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:37:27 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:37:27 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:37:27 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:37:27 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:37:27.752 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:37:27 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:37:27.752 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:37:28 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:28 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3637 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:37:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:29.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:37:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:29.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:29 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:30 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:30 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:37:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:31.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:37:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:37:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:31.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:37:31 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:31 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:37:31.755 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:37:32 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:37:32 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:32 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3642 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:33.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:33.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:33 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:35 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:37:35 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:37:35 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:37:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:35.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:37:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:37:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:35.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:37:36 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:37 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:37:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:37.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:37:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:37.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:37:38 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:38 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3648 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:39.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:39 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:39.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:40 np0005592158 podman[237349]: 2026-01-22 14:37:40.059442848 +0000 UTC m=+0.052456693 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:37:40 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:37:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:41.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:37:41 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:41.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:42 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:42 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:37:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:37:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:43.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:37:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:43.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:43 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:43 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3653 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:44 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:45.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:45.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:45 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:46 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:37:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:47.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:37:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:47.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:37:47.476 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:37:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:37:47.476 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:37:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:37:47.477 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:37:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:37:47 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:48 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:48 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3657 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:48 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:49.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:49.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:49 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:50 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:51.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:51.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:51 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:37:52 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:52 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3662 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:37:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:53.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:37:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:53.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:53 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:54 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:37:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:55.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:37:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:55.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:55 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:56 np0005592158 podman[237369]: 2026-01-22 14:37:56.106756692 +0000 UTC m=+0.098027461 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:37:56 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:37:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:57.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:37:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:57.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:57 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:37:58 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:58 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3667 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:59 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:59.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:37:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:59.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:00 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:01 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:38:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:01.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:38:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:38:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:01.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:38:02 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:02 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:38:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:03.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:03.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:03 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:03 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3672 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:04 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:05.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:05.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:05 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:06 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:38:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:07.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:38:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:07.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:07 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:07 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:38:08 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:08 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3677 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:09.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:09.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:09 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:10 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:10 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:11 np0005592158 podman[237396]: 2026-01-22 14:38:11.103987988 +0000 UTC m=+0.060416517 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:38:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:11.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:11.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:11 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:12 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:12 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3682 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:12 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:38:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:13.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:13.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:13 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:14 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:38:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:15.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:38:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:15.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:15 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:16 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:17.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:17.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:17 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:38:17 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:17 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3687 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:18 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:19.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:19.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:19 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:20 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:21.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:21 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:38:21.329 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:38:21 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:38:21.331 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:38:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:21.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:21 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:22 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:38:22 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:22 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3692 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:23.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:23 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:38:23.333 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:38:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:23.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:23 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:24 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:25.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:25.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:25 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:26 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:27 np0005592158 podman[237413]: 2026-01-22 14:38:27.082531269 +0000 UTC m=+0.078520215 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller)
Jan 22 09:38:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:38:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:27.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:38:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:27.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:27 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:38:27 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:27 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3697 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:28 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:29.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:30.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:30 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:38:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:31.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:38:31 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:38:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:32.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:38:32 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:32 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:38:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:33.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:33 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:33 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3702 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:34.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:34 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:35.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:35 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:35 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:38:35 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:38:35 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 09:38:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:36.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:36 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:36 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 09:38:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:37.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:37 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:37 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:38:37 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:38:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:38:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:38:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:38.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:38:38 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:38 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3708 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:39.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:39 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:40.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:40 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:41.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:41 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:41 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:38:41 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:38:41 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:38:41 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:38:41 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:38:42 np0005592158 podman[237567]: 2026-01-22 14:38:42.069764065 +0000 UTC m=+0.052025033 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 22 09:38:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:42.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:42 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:42 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:38:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:43.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:43 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:43 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3712 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:38:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:44.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:38:44 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:45.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:45 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:46.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:46 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:47.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:38:47.477 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:38:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:38:47.478 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:38:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:38:47.478 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:38:47 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:47 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:38:47 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:38:47 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:47 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3718 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:38:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:48.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:48 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:49.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:49 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:50.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:50 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:51.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:51 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:38:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:52.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:38:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:38:52 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:52 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3723 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:38:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:53.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:38:53 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:54.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:54 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:55.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:55 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:56.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:56 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:38:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:57.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:38:57 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:38:57 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:57 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3728 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:58 np0005592158 podman[237637]: 2026-01-22 14:38:58.098185129 +0000 UTC m=+0.088803581 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 22 09:38:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:58.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:59 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:38:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:59.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:00 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:00.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:01 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:39:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:01.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:39:02 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:02.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:02 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:39:03 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:03 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3733 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:03.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:04 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:04.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:05 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:39:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:05.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:39:06 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:06.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:07 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:07.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:07 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:39:08 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3738 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:08 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:08.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:09 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:09.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:10 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:10.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:11 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:11.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:12 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:39:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:12.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:39:12 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:39:13 np0005592158 podman[237664]: 2026-01-22 14:39:13.094623574 +0000 UTC m=+0.074055266 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 22 09:39:13 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3743 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:13 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:13.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:39:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:14.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:39:14 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:39:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:15.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:39:15 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:39:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:16.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:39:16 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:39:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:17.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:39:17 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:17 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:39:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:39:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/11134575' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:39:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:39:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/11134575' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:39:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:18.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:18 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:18 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3748 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:19.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:19 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:20.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:20 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:21.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:22.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:22 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:22 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:39:23 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:39:23.128 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:39:23 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:39:23.129 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:39:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:23.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:23 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:23 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3753 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:39:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:24.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:39:24 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:25.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:25 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:26.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:26 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:27.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:27 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:27 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:39:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:39:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:28.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:39:28 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:28 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3757 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:29 np0005592158 podman[237683]: 2026-01-22 14:39:29.107710366 +0000 UTC m=+0.097833157 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
container_name=ovn_controller, org.label-schema.schema-version=1.0)
Jan 22 09:39:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:29.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:29 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:30.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:30 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:30 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:39:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:31.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:39:31 np0005592158 ceph-mon[81715]: 43 slow requests (by type [ 'delayed' : 43 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:39:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:32.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:32 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:39:32 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:32 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3763 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:33 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:39:33.132 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:39:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:33.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:33 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:39:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:34.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:39:35 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:39:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:35.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:39:36 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:39:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:36.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:39:37 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:37.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:39:38 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3768 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:38 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:38.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:39 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:39.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:40 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:40.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:41 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:41.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:42 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:39:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:42.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:39:42 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:39:43 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3773 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:43 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:39:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:43.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:39:44 np0005592158 podman[237710]: 2026-01-22 14:39:44.085324512 +0000 UTC m=+0.062426222 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:39:44 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:44.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:45 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #124. Immutable memtables: 0.
Jan 22 09:39:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:39:45.271909) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:39:45 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 124
Jan 22 09:39:45 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092785271957, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 2424, "num_deletes": 251, "total_data_size": 4875919, "memory_usage": 4950208, "flush_reason": "Manual Compaction"}
Jan 22 09:39:45 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #125: started
Jan 22 09:39:45 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092785287910, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 125, "file_size": 3171350, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61579, "largest_seqno": 63997, "table_properties": {"data_size": 3162263, "index_size": 5325, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22746, "raw_average_key_size": 21, "raw_value_size": 3142323, "raw_average_value_size": 2939, "num_data_blocks": 228, "num_entries": 1069, "num_filter_entries": 1069, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092619, "oldest_key_time": 1769092619, "file_creation_time": 1769092785, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:39:45 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 16026 microseconds, and 6016 cpu microseconds.
Jan 22 09:39:45 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:39:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:39:45.287946) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #125: 3171350 bytes OK
Jan 22 09:39:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:39:45.287961) [db/memtable_list.cc:519] [default] Level-0 commit table #125 started
Jan 22 09:39:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:39:45.289578) [db/memtable_list.cc:722] [default] Level-0 commit table #125: memtable #1 done
Jan 22 09:39:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:39:45.289593) EVENT_LOG_v1 {"time_micros": 1769092785289589, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:39:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:39:45.289611) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:39:45 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 4864928, prev total WAL file size 4864928, number of live WAL files 2.
Jan 22 09:39:45 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000121.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:39:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:39:45.291068) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Jan 22 09:39:45 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:39:45 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [125(3097KB)], [123(10137KB)]
Jan 22 09:39:45 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092785291138, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [125], "files_L6": [123], "score": -1, "input_data_size": 13551912, "oldest_snapshot_seqno": -1}
Jan 22 09:39:45 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #126: 11099 keys, 11911206 bytes, temperature: kUnknown
Jan 22 09:39:45 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092785347583, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 126, "file_size": 11911206, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11847570, "index_size": 34788, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27781, "raw_key_size": 299422, "raw_average_key_size": 26, "raw_value_size": 11655881, "raw_average_value_size": 1050, "num_data_blocks": 1311, "num_entries": 11099, "num_filter_entries": 11099, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769092785, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 126, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:39:45 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:39:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:39:45.347890) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 11911206 bytes
Jan 22 09:39:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:39:45.349604) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 239.6 rd, 210.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 9.9 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(8.0) write-amplify(3.8) OK, records in: 11618, records dropped: 519 output_compression: NoCompression
Jan 22 09:39:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:39:45.349621) EVENT_LOG_v1 {"time_micros": 1769092785349613, "job": 78, "event": "compaction_finished", "compaction_time_micros": 56557, "compaction_time_cpu_micros": 28410, "output_level": 6, "num_output_files": 1, "total_output_size": 11911206, "num_input_records": 11618, "num_output_records": 11099, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:39:45 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:39:45 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092785350281, "job": 78, "event": "table_file_deletion", "file_number": 125}
Jan 22 09:39:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:45 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000123.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:39:45 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092785352498, "job": 78, "event": "table_file_deletion", "file_number": 123}
Jan 22 09:39:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:39:45.290982) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:39:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:39:45.352540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:39:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:39:45.352544) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:39:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:39:45.352545) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:39:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:39:45.352547) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:39:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:39:45.352549) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:39:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:39:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:45.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:39:46 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:46 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:46.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:47.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:39:47.478 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:39:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:39:47.478 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:39:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:39:47.478 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:39:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:39:48 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:48 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3778 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:48 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:48.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:49.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:49 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:39:49 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:39:49 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:39:49 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:50.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:51.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:51 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:51 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:39:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:52.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:39:52 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:39:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:53.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:53 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3783 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:53 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:54.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:54 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:39:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:55.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:39:55 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:39:55 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:39:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:56.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:56 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:56 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:57.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:57 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:57 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:39:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:58.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:58 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3788 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:58 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:39:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:59.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:59 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:00 np0005592158 podman[237911]: 2026-01-22 14:40:00.124509417 +0000 UTC m=+0.114366702 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller)
Jan 22 09:40:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:00.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:00 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:00 np0005592158 ceph-mon[81715]: Health detail: HEALTH_WARN 44 slow ops, oldest one blocked for 3788 sec, osd.2 has slow ops
Jan 22 09:40:00 np0005592158 ceph-mon[81715]: [WRN] SLOW_OPS: 44 slow ops, oldest one blocked for 3788 sec, osd.2 has slow ops
Jan 22 09:40:00 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:01.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:01 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:02.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:02 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:40:02 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3793 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:03.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:03 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:03 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:04.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:04 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:05.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:05 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:40:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:06.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:40:07 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:07 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e154 e154: 3 total, 3 up, 3 in
Jan 22 09:40:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:07.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:07 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:40:08 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3798 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:08 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:08.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:09 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:09.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:10 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:10.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:40:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:11.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:40:12 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:12.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:12 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e155 e155: 3 total, 3 up, 3 in
Jan 22 09:40:12 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:40:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:40:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:13.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:40:13 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:13 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3803 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:14 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:14.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:14 np0005592158 podman[237939]: 2026-01-22 14:40:14.551642168 +0000 UTC m=+0.079000068 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 22 09:40:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:40:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:15.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:40:15 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:16 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:16.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:17.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:17 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:17 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:40:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:18.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:18 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:18 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3808 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:19.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:19 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:20.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:20 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:20 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:21.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:21 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:22.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:22 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:40:22 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:22 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3813 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:23.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:24 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:24.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:25 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:25.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:26 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:40:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:26.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:40:26 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:40:26.832 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:40:26 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:40:26.834 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:40:27 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:27.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:27 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:40:28 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:28 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3818 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:28.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:29 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:29.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:30 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:30.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:31 np0005592158 podman[237959]: 2026-01-22 14:40:31.089174912 +0000 UTC m=+0.075933066 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 09:40:31 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:31.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:32 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:40:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:32.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:40:32 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:40:33 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:33 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3823 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:33.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:34 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:34.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:35 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:35.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:35 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:40:35.837 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:40:36 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:36.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:37 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:37.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:37 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #127. Immutable memtables: 0.
Jan 22 09:40:37 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:40:37.645350) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:40:37 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 127
Jan 22 09:40:37 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092837645414, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 967, "num_deletes": 256, "total_data_size": 1535158, "memory_usage": 1553008, "flush_reason": "Manual Compaction"}
Jan 22 09:40:37 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #128: started
Jan 22 09:40:37 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092837657010, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 128, "file_size": 1008549, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64002, "largest_seqno": 64964, "table_properties": {"data_size": 1004293, "index_size": 1779, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10872, "raw_average_key_size": 20, "raw_value_size": 995023, "raw_average_value_size": 1842, "num_data_blocks": 77, "num_entries": 540, "num_filter_entries": 540, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092786, "oldest_key_time": 1769092786, "file_creation_time": 1769092837, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:40:37 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 11697 microseconds, and 2979 cpu microseconds.
Jan 22 09:40:37 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:40:37 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:40:37.657055) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #128: 1008549 bytes OK
Jan 22 09:40:37 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:40:37.657073) [db/memtable_list.cc:519] [default] Level-0 commit table #128 started
Jan 22 09:40:37 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:40:37.658398) [db/memtable_list.cc:722] [default] Level-0 commit table #128: memtable #1 done
Jan 22 09:40:37 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:40:37.658417) EVENT_LOG_v1 {"time_micros": 1769092837658412, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:40:37 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:40:37.658433) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:40:37 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 1530149, prev total WAL file size 1530149, number of live WAL files 2.
Jan 22 09:40:37 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000124.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:40:37 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:40:37.658997) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373633' seq:72057594037927935, type:22 .. '6C6F676D0033303135' seq:0, type:0; will stop at (end)
Jan 22 09:40:37 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:40:37 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [128(984KB)], [126(11MB)]
Jan 22 09:40:37 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092837659044, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [128], "files_L6": [126], "score": -1, "input_data_size": 12919755, "oldest_snapshot_seqno": -1}
Jan 22 09:40:37 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #129: 11110 keys, 12767119 bytes, temperature: kUnknown
Jan 22 09:40:37 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092837722711, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 129, "file_size": 12767119, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12702384, "index_size": 35886, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27781, "raw_key_size": 301007, "raw_average_key_size": 27, "raw_value_size": 12509374, "raw_average_value_size": 1125, "num_data_blocks": 1353, "num_entries": 11110, "num_filter_entries": 11110, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769092837, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 129, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:40:37 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:40:37 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:40:37.722981) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 12767119 bytes
Jan 22 09:40:37 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:40:37.724634) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 202.6 rd, 200.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 11.4 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(25.5) write-amplify(12.7) OK, records in: 11639, records dropped: 529 output_compression: NoCompression
Jan 22 09:40:37 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:40:37.724651) EVENT_LOG_v1 {"time_micros": 1769092837724643, "job": 80, "event": "compaction_finished", "compaction_time_micros": 63770, "compaction_time_cpu_micros": 27598, "output_level": 6, "num_output_files": 1, "total_output_size": 12767119, "num_input_records": 11639, "num_output_records": 11110, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:40:37 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:40:37 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092837724904, "job": 80, "event": "table_file_deletion", "file_number": 128}
Jan 22 09:40:37 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000126.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:40:37 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092837726928, "job": 80, "event": "table_file_deletion", "file_number": 126}
Jan 22 09:40:37 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:40:37.658947) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:40:37 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:40:37.727038) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:40:37 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:40:37.727047) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:40:37 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:40:37.727049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:40:37 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:40:37.727051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:40:37 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:40:37.727053) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:40:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:40:38 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:38 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3828 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:38 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:40:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:38.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:40:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:39.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:39 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:40.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:41.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:41 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:42 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:42 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:42.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:42 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:40:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:43.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e156 e156: 3 total, 3 up, 3 in
Jan 22 09:40:43 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3833 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:43 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:40:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:44.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:40:44 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:45 np0005592158 podman[237986]: 2026-01-22 14:40:45.088498393 +0000 UTC m=+0.065216857 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 09:40:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:45.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:45 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:40:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:46.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:40:46 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:46 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e157 e157: 3 total, 3 up, 3 in
Jan 22 09:40:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:40:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:47.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:40:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:40:47.478 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:40:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:40:47.479 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:40:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:40:47.479 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:40:47 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:40:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:48.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:48 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:48 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3838 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:49.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:49 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:40:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:50.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:40:50 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:51.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:51 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:52.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e158 e158: 3 total, 3 up, 3 in
Jan 22 09:40:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:40:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:40:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:53.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:40:53 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:53 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3843 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:40:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:54.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:40:54 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:54 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:40:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:55.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:40:55 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:55 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:56.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:57 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:57.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:57 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:40:58 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:40:58 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:40:58 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:40:58 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:58 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3848 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:40:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:58.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:40:59 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:40:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:59.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:00 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:00.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:01 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:01.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:02 np0005592158 podman[238137]: 2026-01-22 14:41:02.134035139 +0000 UTC m=+0.119516449 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible)
Jan 22 09:41:02 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:02.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:02 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:41:03 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:03 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3853 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:03.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:41:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:04.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:41:04 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:41:04 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:41:04 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:05.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:05 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:41:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:06.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:41:06 np0005592158 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 09:41:06 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:07.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:07 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:07 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:41:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:08.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:08 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:08 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3858 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:41:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:09.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:41:09 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:10.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:10 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:11.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:11 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:11 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:12.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:12 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:41:12 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3863 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:12 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:41:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:13.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:41:13 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:14.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:14 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:15.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:15 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:16 np0005592158 podman[238214]: 2026-01-22 14:41:16.06465035 +0000 UTC m=+0.051636040 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 22 09:41:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:16.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:16 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:17.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:17 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #130. Immutable memtables: 0.
Jan 22 09:41:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:41:17.663832) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:41:17 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 130
Jan 22 09:41:17 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092877663882, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 807, "num_deletes": 251, "total_data_size": 1239692, "memory_usage": 1257640, "flush_reason": "Manual Compaction"}
Jan 22 09:41:17 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #131: started
Jan 22 09:41:17 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092877672162, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 131, "file_size": 597682, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64969, "largest_seqno": 65771, "table_properties": {"data_size": 594250, "index_size": 1147, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 10006, "raw_average_key_size": 21, "raw_value_size": 586609, "raw_average_value_size": 1264, "num_data_blocks": 49, "num_entries": 464, "num_filter_entries": 464, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092838, "oldest_key_time": 1769092838, "file_creation_time": 1769092877, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:41:17 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 8383 microseconds, and 4687 cpu microseconds.
Jan 22 09:41:17 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:41:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:41:17.672216) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #131: 597682 bytes OK
Jan 22 09:41:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:41:17.672238) [db/memtable_list.cc:519] [default] Level-0 commit table #131 started
Jan 22 09:41:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:41:17.673795) [db/memtable_list.cc:722] [default] Level-0 commit table #131: memtable #1 done
Jan 22 09:41:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:41:17.673823) EVENT_LOG_v1 {"time_micros": 1769092877673815, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:41:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:41:17.673844) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:41:17 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 1235340, prev total WAL file size 1235340, number of live WAL files 2.
Jan 22 09:41:17 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000127.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:41:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:41:17.674884) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373537' seq:72057594037927935, type:22 .. '6D6772737461740032303038' seq:0, type:0; will stop at (end)
Jan 22 09:41:17 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:41:17 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [131(583KB)], [129(12MB)]
Jan 22 09:41:17 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092877674986, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [131], "files_L6": [129], "score": -1, "input_data_size": 13364801, "oldest_snapshot_seqno": -1}
Jan 22 09:41:17 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #132: 11069 keys, 9679072 bytes, temperature: kUnknown
Jan 22 09:41:17 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092877737725, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 132, "file_size": 9679072, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9618895, "index_size": 31392, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27717, "raw_key_size": 300663, "raw_average_key_size": 27, "raw_value_size": 9430810, "raw_average_value_size": 852, "num_data_blocks": 1165, "num_entries": 11069, "num_filter_entries": 11069, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769092877, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 132, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:41:17 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:41:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:41:17.737970) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 9679072 bytes
Jan 22 09:41:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:41:17.739265) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 212.8 rd, 154.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 12.2 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(38.6) write-amplify(16.2) OK, records in: 11574, records dropped: 505 output_compression: NoCompression
Jan 22 09:41:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:41:17.739281) EVENT_LOG_v1 {"time_micros": 1769092877739273, "job": 82, "event": "compaction_finished", "compaction_time_micros": 62810, "compaction_time_cpu_micros": 25045, "output_level": 6, "num_output_files": 1, "total_output_size": 9679072, "num_input_records": 11574, "num_output_records": 11069, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:41:17 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:41:17 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092877739484, "job": 82, "event": "table_file_deletion", "file_number": 131}
Jan 22 09:41:17 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000129.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:41:17 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092877742014, "job": 82, "event": "table_file_deletion", "file_number": 129}
Jan 22 09:41:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:41:17.674786) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:41:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:41:17.742068) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:41:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:41:17.742074) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:41:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:41:17.742076) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:41:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:41:17.742078) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:41:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:41:17.742080) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:41:17 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:41:17 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:17 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3868 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:41:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:18.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:41:18 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:19.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:20 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:20.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:21 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:41:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:21.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:41:22 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:22.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:22 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:41:23 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:23 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3873 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:23.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:24 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:24.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:25 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:41:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:25.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:41:26 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:26.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:27 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:27.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:27 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:41:28 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:28 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3878 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:28.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:29 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:41:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:29.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:41:29 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:41:29.628 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:41:29 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:41:29.629 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:41:30 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:30.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:31 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:31.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:31 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:41:31.631 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:41:32 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:41:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:32.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:41:32 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:41:33 np0005592158 podman[238235]: 2026-01-22 14:41:33.097437896 +0000 UTC m=+0.082366722 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 09:41:33 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:33 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3883 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:33.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:34 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:34.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:35 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:35.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:36.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:36 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:37.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:37 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:41:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:38.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:38 np0005592158 ceph-mon[81715]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:38 np0005592158 ceph-mon[81715]: Health check update: 44 slow ops, oldest one blocked for 3887 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:39.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:39 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:39 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:40.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:40 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:41:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:41.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:41:41 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:42.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:42 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:41:42 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:42 np0005592158 ceph-mon[81715]: Health check update: 7 slow ops, oldest one blocked for 3892 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:41:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:43.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:41:43 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:41:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:44.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:41:44 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:45.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:45 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:46.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:47 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:47 np0005592158 podman[238261]: 2026-01-22 14:41:47.0556513 +0000 UTC m=+0.051629210 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:41:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:41:47.480 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:41:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:41:47.480 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:41:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:41:47.480 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:41:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:41:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:47.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:41:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:41:48 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:48 np0005592158 ceph-mon[81715]: Health check update: 7 slow ops, oldest one blocked for 3897 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:48.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:49 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:41:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:49.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:41:50 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:50.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:51 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:51.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:52 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:52.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:41:53 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:53 np0005592158 ceph-mon[81715]: Health check update: 7 slow ops, oldest one blocked for 3902 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:53.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:54 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:54.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:55 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:55.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:56 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:56.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:57 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:57.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:57 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:41:58 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:58 np0005592158 ceph-mon[81715]: Health check update: 7 slow ops, oldest one blocked for 3908 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:41:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:58.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:41:59 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:41:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:41:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:59.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:42:00 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:42:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:00.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:01 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:42:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:01.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:02 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:42:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:02.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:02 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:42:03 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:42:03 np0005592158 ceph-mon[81715]: Health check update: 7 slow ops, oldest one blocked for 3913 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:03.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:04 np0005592158 podman[238281]: 2026-01-22 14:42:04.123526375 +0000 UTC m=+0.106644450 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:42:04 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:42:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:04.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:04 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e159 e159: 3 total, 3 up, 3 in
Jan 22 09:42:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:05.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:05 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:42:05 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 09:42:05 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:42:05 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:42:05 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:42:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:06.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:06 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:42:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:42:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:07.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:42:07 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:42:07 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:42:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:08.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:08 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:42:08 np0005592158 ceph-mon[81715]: Health check update: 7 slow ops, oldest one blocked for 3918 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:42:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:09.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:42:09 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:42:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:10.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:42:11 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:11.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:12 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:12 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:12 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:42:12 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:42:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:12.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:12 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:42:13 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:13 np0005592158 ceph-mon[81715]: Health check update: 10 slow ops, oldest one blocked for 3922 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:13.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:14 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:42:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:14.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:42:15 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:15.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:16 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:16.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:17 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:42:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:17.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:42:17 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:42:18 np0005592158 podman[238490]: 2026-01-22 14:42:18.060335734 +0000 UTC m=+0.052587136 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 22 09:42:18 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:18 np0005592158 ceph-mon[81715]: Health check update: 10 slow ops, oldest one blocked for 3928 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:18.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:19 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:19.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:20 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:20.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:21 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:21.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:22 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e160 e160: 3 total, 3 up, 3 in
Jan 22 09:42:22 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:22.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:22 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:42:23 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:23 np0005592158 ceph-mon[81715]: Health check update: 10 slow ops, oldest one blocked for 3932 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:23.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:24 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:24.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:25 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:25.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:26 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:26.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:27.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:27 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:27 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 e161: 3 total, 3 up, 3 in
Jan 22 09:42:27 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:42:28 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:28 np0005592158 ceph-mon[81715]: Health check update: 10 slow ops, oldest one blocked for 3937 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:28 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #133. Immutable memtables: 0.
Jan 22 09:42:28 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:42:28.575691) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:42:28 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 133
Jan 22 09:42:28 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092948575761, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 1222, "num_deletes": 252, "total_data_size": 2043289, "memory_usage": 2076368, "flush_reason": "Manual Compaction"}
Jan 22 09:42:28 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #134: started
Jan 22 09:42:28 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092948639748, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 134, "file_size": 1341494, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65776, "largest_seqno": 66993, "table_properties": {"data_size": 1336559, "index_size": 2266, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 13171, "raw_average_key_size": 20, "raw_value_size": 1325558, "raw_average_value_size": 2097, "num_data_blocks": 98, "num_entries": 632, "num_filter_entries": 632, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092877, "oldest_key_time": 1769092877, "file_creation_time": 1769092948, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:42:28 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 64308 microseconds, and 4130 cpu microseconds.
Jan 22 09:42:28 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:42:28 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:42:28.640007) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #134: 1341494 bytes OK
Jan 22 09:42:28 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:42:28.640120) [db/memtable_list.cc:519] [default] Level-0 commit table #134 started
Jan 22 09:42:28 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:42:28.641506) [db/memtable_list.cc:722] [default] Level-0 commit table #134: memtable #1 done
Jan 22 09:42:28 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:42:28.641530) EVENT_LOG_v1 {"time_micros": 1769092948641521, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:42:28 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:42:28.641551) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:42:28 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 2037243, prev total WAL file size 2037243, number of live WAL files 2.
Jan 22 09:42:28 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000130.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:42:28 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:42:28.643285) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Jan 22 09:42:28 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:42:28 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [134(1310KB)], [132(9452KB)]
Jan 22 09:42:28 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092948643356, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [134], "files_L6": [132], "score": -1, "input_data_size": 11020566, "oldest_snapshot_seqno": -1}
Jan 22 09:42:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:28.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:28 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #135: 11180 keys, 9369009 bytes, temperature: kUnknown
Jan 22 09:42:28 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092948715810, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 135, "file_size": 9369009, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9308578, "index_size": 31390, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27973, "raw_key_size": 304170, "raw_average_key_size": 27, "raw_value_size": 9118918, "raw_average_value_size": 815, "num_data_blocks": 1161, "num_entries": 11180, "num_filter_entries": 11180, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769092948, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 135, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:42:28 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:42:28 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:42:28.716213) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 9369009 bytes
Jan 22 09:42:28 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:42:28.717531) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.0 rd, 129.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 9.2 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(15.2) write-amplify(7.0) OK, records in: 11701, records dropped: 521 output_compression: NoCompression
Jan 22 09:42:28 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:42:28.717546) EVENT_LOG_v1 {"time_micros": 1769092948717539, "job": 84, "event": "compaction_finished", "compaction_time_micros": 72511, "compaction_time_cpu_micros": 45970, "output_level": 6, "num_output_files": 1, "total_output_size": 9369009, "num_input_records": 11701, "num_output_records": 11180, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:42:28 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:42:28 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092948717873, "job": 84, "event": "table_file_deletion", "file_number": 134}
Jan 22 09:42:28 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000132.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:42:28 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092948719339, "job": 84, "event": "table_file_deletion", "file_number": 132}
Jan 22 09:42:28 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:42:28.643209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:42:28 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:42:28.719363) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:42:28 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:42:28.719366) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:42:28 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:42:28.719367) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:42:28 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:42:28.719369) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:42:28 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:42:28.719370) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:42:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:29.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:30 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:30 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:42:30.495 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:42:30 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:42:30.496 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:42:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:30.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:31 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:31 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:31.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:32 np0005592158 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:32.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:32 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:42:33 np0005592158 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:33 np0005592158 ceph-mon[81715]: Health check update: 10 slow ops, oldest one blocked for 3942 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:33.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:34 np0005592158 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:42:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:34.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:42:35 np0005592158 podman[238510]: 2026-01-22 14:42:35.076076604 +0000 UTC m=+0.069162994 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Jan 22 09:42:35 np0005592158 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:35.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:36 np0005592158 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:36.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:37 np0005592158 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:37.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:42:38 np0005592158 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:38 np0005592158 ceph-mon[81715]: Health check update: 51 slow ops, oldest one blocked for 3948 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:42:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:38.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:42:39 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:42:39.498 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:42:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:39.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:39 np0005592158 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:40 np0005592158 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:40.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:41.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:41 np0005592158 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:42.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:42 np0005592158 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:42 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:42:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:43.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:43 np0005592158 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:43 np0005592158 ceph-mon[81715]: Health check update: 51 slow ops, oldest one blocked for 3953 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:44.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:44 np0005592158 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:45.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:45 np0005592158 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:46.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:46 np0005592158 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:46 np0005592158 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:42:47.481 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:42:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:42:47.481 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:42:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:42:47.481 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:42:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:47.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:42:47 np0005592158 ceph-mon[81715]: Health check update: 51 slow ops, oldest one blocked for 3958 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:47 np0005592158 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:48.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:48 np0005592158 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:49 np0005592158 podman[238536]: 2026-01-22 14:42:49.083630702 +0000 UTC m=+0.069938086 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 22 09:42:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:42:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:49.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:42:49 np0005592158 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:42:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:50.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:42:50 np0005592158 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:51.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:51 np0005592158 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:42:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:52.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:42:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:42:52 np0005592158 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:52 np0005592158 ceph-mon[81715]: Health check update: 51 slow ops, oldest one blocked for 3963 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:42:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:53.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:42:53 np0005592158 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:54.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:54 np0005592158 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:55.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:55 np0005592158 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:42:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:56.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:42:56 np0005592158 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:42:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:57.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:42:57 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:42:57 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:42:57 np0005592158 ceph-mon[81715]: Health check update: 51 slow ops, oldest one blocked for 3968 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:58.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:58 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:42:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:42:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:42:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:59.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:42:59 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:00.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:00 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:43:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:01.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:43:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:02.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:02 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:43:03 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:03 np0005592158 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 3973 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:03.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:04 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:04 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:04.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:05 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:05.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:06 np0005592158 podman[238555]: 2026-01-22 14:43:06.126123479 +0000 UTC m=+0.114474793 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 22 09:43:06 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:06.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:07 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:07.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:07 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:43:08 np0005592158 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 3978 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:08 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:08.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:09.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:09 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:10.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:10 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:11.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:11 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:11 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:43:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:12.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:43:12 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:43:13 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:13 np0005592158 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 3983 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:43:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:13.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:43:14 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:43:14 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:43:14 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:43:14 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:14.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:15 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:15.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:16 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:43:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:16.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:43:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:43:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:17.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:43:17 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:43:17 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:18.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:18 np0005592158 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 3988 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:18 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:18 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:19.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:20 np0005592158 podman[238712]: 2026-01-22 14:43:20.048498636 +0000 UTC m=+0.045969326 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:43:20 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:20.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:21 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:21.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:22 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:22 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:43:22 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:43:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:22.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:22 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:43:23 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:43:23.040 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:43:23 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:43:23.040 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:43:23 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:23 np0005592158 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 3993 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:23.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:24 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:43:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:24.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:43:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:43:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:25.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:43:25 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:26.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:26 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:27 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:43:27.042 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:43:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:27.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:27 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:43:27 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:27 np0005592158 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 3998 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:28.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:29.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:30 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:30 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:30.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:31 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:31.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:32 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:32 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:43:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:32.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:33 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:33 np0005592158 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4003 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:33.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:34 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:34.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:35 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:35.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:36 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:36.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:37 np0005592158 podman[238783]: 2026-01-22 14:43:37.09157283 +0000 UTC m=+0.087864772 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:43:37 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:37.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:43:38 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:38 np0005592158 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4008 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:43:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:38.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:43:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:39.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:39 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:39 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:40 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:40.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:41.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:42 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:42 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:43:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:42.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:43 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:43 np0005592158 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4013 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:43.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:44 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:43:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:44.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:43:45 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:45.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:46 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:43:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:46.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:43:47 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:43:47.481 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:43:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:43:47.482 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:43:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:43:47.482 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:43:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:47.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:43:48 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:48 np0005592158 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4017 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:48.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:49 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:49.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:50 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:50.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:51 np0005592158 podman[238809]: 2026-01-22 14:43:51.093028248 +0000 UTC m=+0.075426654 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 09:43:51 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:51.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:52 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:43:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:52.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:53 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:53 np0005592158 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4022 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:53.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:54 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:54.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:55 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:43:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:55.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:43:56 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:56.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:57 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:57.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:57 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:43:58 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:58 np0005592158 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4027 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:59.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:43:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:59.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:59 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:59 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:00 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:01.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:01.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:01 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:02 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:44:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:03.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:03 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:03 np0005592158 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4032 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:44:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:03.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:44:04 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:44:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:05.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:44:05 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:05.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:06 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:07.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:07 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:44:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:07.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:44:07 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:44:08 np0005592158 podman[238828]: 2026-01-22 14:44:08.098310538 +0000 UTC m=+0.085866128 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:44:08 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:08 np0005592158 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4037 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:09.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:09.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:09 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:10 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:11.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:44:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:11.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:44:11 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:12 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:12 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:44:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:13.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:13.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:13 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:13 np0005592158 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4042 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:14 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:14 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:14 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:44:14.901 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:44:14 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:44:14.902 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:44:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:44:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:15.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:44:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:44:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:15.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:44:15 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:44:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:17.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:44:17 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:44:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:17.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:44:17 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:44:18 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:18 np0005592158 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4047 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:44:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2737044789' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:44:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:44:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2737044789' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:44:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:19.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:19 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:19.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:20 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:21.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:44:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:21.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:44:21 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:21 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:22 np0005592158 podman[238856]: 2026-01-22 14:44:22.055481454 +0000 UTC m=+0.050845948 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 22 09:44:22 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:44:22 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:44:22.904 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:44:22 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:23.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:23.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:24 np0005592158 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4052 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:24 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:25.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:25 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:25.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:26 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:26 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:44:26 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:44:26 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:44:26 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:44:26 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:44:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:44:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:27.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:44:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:27.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:27 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:44:27 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:28 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:28 np0005592158 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4058 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:28 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:29.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:44:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:29.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:44:29 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:30 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:44:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:31.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:44:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:31.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:31 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:32 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:44:32 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:33.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:33.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:33 np0005592158 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:33 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:44:33 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:44:33 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:34 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 09:44:34 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.5 total, 600.0 interval#012Cumulative writes: 11K writes, 40K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 11K writes, 3637 syncs, 3.29 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1012 writes, 2061 keys, 1012 commit groups, 1.0 writes per commit group, ingest: 0.95 MB, 0.00 MB/s#012Interval WAL: 1012 writes, 474 syncs, 2.14 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 09:44:34 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:35.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:44:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:35.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:44:35 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:37 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:37.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:37.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:44:38 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:38 np0005592158 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4068 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:39 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:39.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:39 np0005592158 podman[239056]: 2026-01-22 14:44:39.119804022 +0000 UTC m=+0.105813748 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Jan 22 09:44:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:39.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:40 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:41.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:41 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:41.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:42 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:42 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:44:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:43.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:43 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:43 np0005592158 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4073 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:43.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:44 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:45.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:44:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:45.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:44:45 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:45 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:46 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:47.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:44:47.482 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:44:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:44:47.482 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:44:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:44:47.482 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:44:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:44:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:44:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:47.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:44:48 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:48 np0005592158 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4078 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:49.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:49 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:49.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:50 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:51 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:44:51 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3107862613' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:44:51 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:44:51 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3107862613' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:44:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:51.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:51 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:51.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:52 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:44:53 np0005592158 podman[239083]: 2026-01-22 14:44:53.074168995 +0000 UTC m=+0.065191457 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 22 09:44:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:53.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:53 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:53 np0005592158 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4083 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:44:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:53.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:44:54 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:55.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:55 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:55.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:56 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:44:56.424 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:44:56 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:44:56.424 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:44:56 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:57.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:57 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:57 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:44:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:57.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:58 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:58 np0005592158 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4088 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:59.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:44:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:44:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:59.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:44:59 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:59 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:00 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:01.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:45:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:01.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:45:01 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:02 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:45:02 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:02 np0005592158 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4093 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:03.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:03.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:04 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:04 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:45:04.426 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:45:05 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:05.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:05.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:06 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:07.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:07 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:07 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:45:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:07.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:07 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #136. Immutable memtables: 0.
Jan 22 09:45:07 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:07.916111) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:45:07 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 136
Jan 22 09:45:07 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093107916153, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 2304, "num_deletes": 257, "total_data_size": 4497747, "memory_usage": 4561632, "flush_reason": "Manual Compaction"}
Jan 22 09:45:07 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #137: started
Jan 22 09:45:07 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093107933612, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 137, "file_size": 2933109, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66998, "largest_seqno": 69297, "table_properties": {"data_size": 2924499, "index_size": 4911, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21758, "raw_average_key_size": 21, "raw_value_size": 2905497, "raw_average_value_size": 2807, "num_data_blocks": 213, "num_entries": 1035, "num_filter_entries": 1035, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092948, "oldest_key_time": 1769092948, "file_creation_time": 1769093107, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:45:07 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 17586 microseconds, and 6790 cpu microseconds.
Jan 22 09:45:07 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:45:07 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:07.933699) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #137: 2933109 bytes OK
Jan 22 09:45:07 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:07.933719) [db/memtable_list.cc:519] [default] Level-0 commit table #137 started
Jan 22 09:45:07 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:07.934925) [db/memtable_list.cc:722] [default] Level-0 commit table #137: memtable #1 done
Jan 22 09:45:07 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:07.934938) EVENT_LOG_v1 {"time_micros": 1769093107934933, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:45:07 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:07.934954) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:45:07 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 4487194, prev total WAL file size 4487194, number of live WAL files 2.
Jan 22 09:45:07 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000133.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:45:07 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:07.935990) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303134' seq:72057594037927935, type:22 .. '6C6F676D0033323637' seq:0, type:0; will stop at (end)
Jan 22 09:45:07 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:45:07 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [137(2864KB)], [135(9149KB)]
Jan 22 09:45:07 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093107936040, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [137], "files_L6": [135], "score": -1, "input_data_size": 12302118, "oldest_snapshot_seqno": -1}
Jan 22 09:45:08 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #138: 11688 keys, 12155432 bytes, temperature: kUnknown
Jan 22 09:45:08 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093108003775, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 138, "file_size": 12155432, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12089398, "index_size": 35713, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29253, "raw_key_size": 316715, "raw_average_key_size": 27, "raw_value_size": 11888466, "raw_average_value_size": 1017, "num_data_blocks": 1339, "num_entries": 11688, "num_filter_entries": 11688, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769093107, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 138, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:45:08 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:45:08 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:08.004054) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 12155432 bytes
Jan 22 09:45:08 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:08.005276) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.4 rd, 179.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 8.9 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(8.3) write-amplify(4.1) OK, records in: 12215, records dropped: 527 output_compression: NoCompression
Jan 22 09:45:08 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:08.005295) EVENT_LOG_v1 {"time_micros": 1769093108005286, "job": 86, "event": "compaction_finished", "compaction_time_micros": 67823, "compaction_time_cpu_micros": 27907, "output_level": 6, "num_output_files": 1, "total_output_size": 12155432, "num_input_records": 12215, "num_output_records": 11688, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:45:08 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:45:08 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093108005944, "job": 86, "event": "table_file_deletion", "file_number": 137}
Jan 22 09:45:08 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000135.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:45:08 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093108007597, "job": 86, "event": "table_file_deletion", "file_number": 135}
Jan 22 09:45:08 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:07.935943) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:45:08 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:08.007704) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:45:08 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:08.007710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:45:08 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:08.007711) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:45:08 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:08.007713) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:45:08 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:08.007714) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:45:08 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:08 np0005592158 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4098 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:45:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:09.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:45:09 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:09.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:10 np0005592158 podman[239105]: 2026-01-22 14:45:10.160935131 +0000 UTC m=+0.142583714 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, 
io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 22 09:45:10 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:11.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:11 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:11.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:12 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:45:12 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:45:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:13.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:45:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:13.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:13 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:13 np0005592158 ceph-mon[81715]: Health check update: 52 slow ops, oldest one blocked for 4103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:13 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:14 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:45:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:15.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:45:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:45:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:15.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:45:15 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:17 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:45:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:17.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:45:17 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:45:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:45:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:17.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:45:18 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:18 np0005592158 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 4107 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:45:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2689568655' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:45:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:45:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2689568655' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:45:18 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #139. Immutable memtables: 0.
Jan 22 09:45:18 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:18.713867) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:45:18 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 139
Jan 22 09:45:18 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093118713923, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 389, "num_deletes": 251, "total_data_size": 292936, "memory_usage": 300440, "flush_reason": "Manual Compaction"}
Jan 22 09:45:18 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #140: started
Jan 22 09:45:18 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093118717189, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 140, "file_size": 192034, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69302, "largest_seqno": 69686, "table_properties": {"data_size": 189789, "index_size": 344, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5891, "raw_average_key_size": 18, "raw_value_size": 185289, "raw_average_value_size": 595, "num_data_blocks": 15, "num_entries": 311, "num_filter_entries": 311, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093108, "oldest_key_time": 1769093108, "file_creation_time": 1769093118, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:45:18 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 3358 microseconds, and 1214 cpu microseconds.
Jan 22 09:45:18 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:45:18 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:18.717229) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #140: 192034 bytes OK
Jan 22 09:45:18 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:18.717247) [db/memtable_list.cc:519] [default] Level-0 commit table #140 started
Jan 22 09:45:18 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:18.718554) [db/memtable_list.cc:722] [default] Level-0 commit table #140: memtable #1 done
Jan 22 09:45:18 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:18.718571) EVENT_LOG_v1 {"time_micros": 1769093118718565, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:45:18 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:18.718586) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:45:18 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 290358, prev total WAL file size 290358, number of live WAL files 2.
Jan 22 09:45:18 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000136.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:45:18 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:18.718972) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Jan 22 09:45:18 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:45:18 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [140(187KB)], [138(11MB)]
Jan 22 09:45:18 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093118719006, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [140], "files_L6": [138], "score": -1, "input_data_size": 12347466, "oldest_snapshot_seqno": -1}
Jan 22 09:45:18 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #141: 11488 keys, 10715333 bytes, temperature: kUnknown
Jan 22 09:45:18 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093118798253, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 141, "file_size": 10715333, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10651715, "index_size": 33809, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28741, "raw_key_size": 313328, "raw_average_key_size": 27, "raw_value_size": 10455224, "raw_average_value_size": 910, "num_data_blocks": 1254, "num_entries": 11488, "num_filter_entries": 11488, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769093118, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 141, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:45:18 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:45:18 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:18.798484) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 10715333 bytes
Jan 22 09:45:18 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:18.800131) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.6 rd, 135.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 11.6 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(120.1) write-amplify(55.8) OK, records in: 11999, records dropped: 511 output_compression: NoCompression
Jan 22 09:45:18 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:18.800147) EVENT_LOG_v1 {"time_micros": 1769093118800140, "job": 88, "event": "compaction_finished", "compaction_time_micros": 79330, "compaction_time_cpu_micros": 35215, "output_level": 6, "num_output_files": 1, "total_output_size": 10715333, "num_input_records": 11999, "num_output_records": 11488, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:45:18 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:45:18 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093118800376, "job": 88, "event": "table_file_deletion", "file_number": 140}
Jan 22 09:45:18 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000138.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:45:18 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093118802859, "job": 88, "event": "table_file_deletion", "file_number": 138}
Jan 22 09:45:18 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:18.718935) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:45:18 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:18.802986) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:45:18 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:18.802993) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:45:18 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:18.802996) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:45:18 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:18.802999) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:45:18 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:45:18.803001) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:45:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:19.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:19 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:45:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:19.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:45:20 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:21.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:21 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:21.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:22 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:22 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:45:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:23.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:23 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:23 np0005592158 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 4112 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:23.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:24 np0005592158 podman[239133]: 2026-01-22 14:45:24.049389999 +0000 UTC m=+0.045299288 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Jan 22 09:45:24 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:25.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:25 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:25.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:26 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:27.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:27 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:27 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:45:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:27.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:28 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:28 np0005592158 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 4118 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:29.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:45:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:29.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:45:30 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:31 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:31 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:31.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:31 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 09:45:31 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4200.0 total, 600.0 interval
Cumulative writes: 12K writes, 69K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s
Cumulative WAL: 12K writes, 12K syncs, 1.00 writes per sync, written: 0.12 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1872 writes, 9897 keys, 1872 commit groups, 1.0 writes per commit group, ingest: 16.53 MB, 0.03 MB/s
Interval WAL: 1872 writes, 1872 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     63.8      1.19              0.24        44    0.027       0      0       0.0       0.0
  L6      1/0   10.22 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.2    134.3    115.1      3.45              1.12        43    0.080    364K    23K       0.0       0.0
 Sum      1/0   10.22 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.2     99.7    101.9      4.64              1.36        87    0.053    364K    23K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.1    143.6    146.2      0.65              0.28        16    0.041     92K   4158       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0    134.3    115.1      3.45              1.12        43    0.080    364K    23K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     63.9      1.19              0.24        43    0.028       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 4200.0 total, 600.0 interval
Flush(GB): cumulative 0.074, interval 0.012
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.46 GB write, 0.11 MB/s write, 0.45 GB read, 0.11 MB/s read, 4.6 seconds
Interval compaction: 0.09 GB write, 0.16 MB/s write, 0.09 GB read, 0.16 MB/s read, 0.7 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55f7686a91f0#2 capacity: 304.00 MB usage: 50.61 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000213 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(2671,48.27 MB,15.8792%) FilterBlock(87,1018.30 KB,0.327115%) IndexBlock(87,1.34 MB,0.440181%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Jan 22 09:45:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:31.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:32 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:32 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:45:33 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:33 np0005592158 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 4123 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:33.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:45:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:33.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:45:34 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:34 np0005592158 podman[239324]: 2026-01-22 14:45:34.482792088 +0000 UTC m=+0.079295709 container exec 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Jan 22 09:45:34 np0005592158 podman[239324]: 2026-01-22 14:45:34.566521406 +0000 UTC m=+0.163025017 container exec_died 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:45:35 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:35 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:45:35 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:45:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:35.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:35.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:36 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:36 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:45:36 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:45:36 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:45:37 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:45:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:37.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:45:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:45:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:37.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:38 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:38 np0005592158 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 4128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:39 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:39.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:39.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:40 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:41 np0005592158 podman[239579]: 2026-01-22 14:45:41.118434331 +0000 UTC m=+0.097459482 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 22 09:45:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:41 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:41.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:41.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:42 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:42 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:45:42 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:45:42 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:45:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:45:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:43.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:45:43 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:43 np0005592158 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 4133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:45:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:43.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:45:44 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:45.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:45 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:45.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:46 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:45:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:47.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:45:47 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:45:47.483 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:45:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:45:47.484 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:45:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:45:47.484 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:45:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:45:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:47.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:48 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:48 np0005592158 ceph-mon[81715]: Health check update: 53 slow ops, oldest one blocked for 4138 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:49.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:49 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:45:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:49.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:45:50 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:51.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:51 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:51.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:52 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:45:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:53.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:53 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:53 np0005592158 ceph-mon[81715]: Health check update: 53 slow ops, oldest one blocked for 4143 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:45:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:53.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:45:54 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:55 np0005592158 podman[239658]: 2026-01-22 14:45:55.064415068 +0000 UTC m=+0.049872461 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 09:45:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:55.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:55 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:55.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:56 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:45:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:57.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:45:57 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:57 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:45:57.704 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:45:57 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:45:57.705 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:45:57 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:45:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:57.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:58 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:58 np0005592158 ceph-mon[81715]: Health check update: 53 slow ops, oldest one blocked for 4148 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:59.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:59 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:45:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:59.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:00 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:46:00.707 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:46:00 np0005592158 ceph-mon[81715]: 46 slow requests (by type [ 'delayed' : 46 ] most affected pool [ 'vms' : 38 ])
Jan 22 09:46:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:01.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:01 np0005592158 ceph-mon[81715]: 46 slow requests (by type [ 'delayed' : 46 ] most affected pool [ 'vms' : 38 ])
Jan 22 09:46:01 np0005592158 ceph-mon[81715]: 46 slow requests (by type [ 'delayed' : 46 ] most affected pool [ 'vms' : 38 ])
Jan 22 09:46:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:01.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:02 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:46:02 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:03.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:03.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:03 np0005592158 ceph-mon[81715]: Health check update: 46 slow ops, oldest one blocked for 4153 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:03 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:04 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:05.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:46:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:05.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:46:05 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:07 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:07.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:07 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:46:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:07.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:08 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:08 np0005592158 ceph-mon[81715]: Health check update: 1 slow ops, oldest one blocked for 4158 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:09 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:46:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:09.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:46:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:09.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:10 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:11 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.002000054s ======
Jan 22 09:46:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:11.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Jan 22 09:46:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:11.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:12 np0005592158 podman[239678]: 2026-01-22 14:46:12.096975976 +0000 UTC m=+0.085104836 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 09:46:12 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:12 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:46:13 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:13 np0005592158 ceph-mon[81715]: Health check update: 1 slow ops, oldest one blocked for 4163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:13.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:13.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:14 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:15 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:15.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:46:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:15.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:46:16 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:16 np0005592158 ceph-osd[79044]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 22 09:46:17 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:17.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:17 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:46:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:46:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:17.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:46:18 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:18 np0005592158 ceph-mon[81715]: Health check update: 1 slow ops, oldest one blocked for 4168 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:18 np0005592158 ceph-osd[79044]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 22 09:46:19 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:19.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:46:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:19.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:46:20 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:46:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:21.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:46:21 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:21.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:22 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:22 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:46:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:23.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:23 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:23 np0005592158 ceph-mon[81715]: Health check update: 1 slow ops, oldest one blocked for 4173 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:23.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:24 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:25.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:25 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:46:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:25.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:46:26 np0005592158 podman[239704]: 2026-01-22 14:46:26.088369954 +0000 UTC m=+0.071996982 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 09:46:26 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:27.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:27 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:27 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:46:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:27.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:28 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:28 np0005592158 ceph-mon[81715]: Health check update: 1 slow ops, oldest one blocked for 4178 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:29.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:29 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:29.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:30 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:46:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:31.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:46:31 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:31.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:32 np0005592158 ceph-mon[81715]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:32 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:46:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:46:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:33.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:46:33 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:33 np0005592158 ceph-mon[81715]: Health check update: 1 slow ops, oldest one blocked for 4183 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:33.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:34 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:46:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:35.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:46:35 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:46:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:35.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:46:36 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:37.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:37 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:46:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:37.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:38 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:38 np0005592158 ceph-mon[81715]: Health check update: 10 slow ops, oldest one blocked for 4188 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:39.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:39 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:39.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:40 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:41.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:41 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:41 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:41.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:42 np0005592158 podman[239750]: 2026-01-22 14:46:42.613201047 +0000 UTC m=+0.082247510 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:46:42 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:46:42 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:43.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:46:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:43.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:46:43 np0005592158 ceph-mon[81715]: Health check update: 10 slow ops, oldest one blocked for 4193 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:43 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:46:43 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:46:43 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:43 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:46:44 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:46:44 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:46:44 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:46:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:45.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:46:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:46:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:45.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:46:45 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:46 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:46:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:47.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:46:47.484 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:46:47.484 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:46:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:46:47.484 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:46:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:46:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:47.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:47 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:47 np0005592158 ceph-mon[81715]: Health check update: 10 slow ops, oldest one blocked for 4198 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:46:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:49.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:46:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:49.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:50 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:50 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:50 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:46:50 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:46:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:51.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:51 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:46:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:51.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:46:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:46:52 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:52 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:53.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:53 np0005592158 ceph-mon[81715]: Health check update: 10 slow ops, oldest one blocked for 4203 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:53 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:53.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:54 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:55.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:55 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:55.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:56 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:57 np0005592158 podman[239932]: 2026-01-22 14:46:57.068966723 +0000 UTC m=+0.056783689 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 22 09:46:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:57.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:57 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:46:57 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:57.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:58 np0005592158 ceph-mon[81715]: Health check update: 10 slow ops, oldest one blocked for 4208 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:58 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:59.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:46:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:59.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:59 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:47:00 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:47:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:47:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:01.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:47:01 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:47:01.874 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:47:01 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:47:01.875 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:47:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:47:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:01.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:47:02 np0005592158 ceph-mon[81715]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:47:02 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:47:03 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:03 np0005592158 ceph-mon[81715]: Health check update: 10 slow ops, oldest one blocked for 4213 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:03.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:03.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:04 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:05.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:05 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:47:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:05.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:47:06 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:06 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:47:06.878 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:47:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:07.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:07 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:07 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:47:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:07.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:08 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:08 np0005592158 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4218 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:09.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:09 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:09.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:10 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:11.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:11 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:47:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:11.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:47:12 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:47:12 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:13 np0005592158 podman[239951]: 2026-01-22 14:47:13.130576266 +0000 UTC m=+0.121060990 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Jan 22 09:47:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:13.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:13 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:13 np0005592158 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4223 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:13 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:47:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:13.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:47:14 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:47:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:15.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:47:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:15.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:16 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:17 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:47:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:17.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:47:17 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:47:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:47:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:17.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:47:18 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:18 np0005592158 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4228 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:47:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2772379494' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:47:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:47:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2772379494' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:47:19 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:19.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:19.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:20 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:21 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:21.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:21.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:22 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:22 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:47:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:47:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:23.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:47:23 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:23 np0005592158 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4233 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:47:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:23.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:47:24 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:25.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:25 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:25.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:27 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:27.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:27 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:47:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:47:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:27.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:47:28 np0005592158 podman[239981]: 2026-01-22 14:47:28.101941503 +0000 UTC m=+0.086833354 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 09:47:28 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:28 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:28 np0005592158 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4238 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:29 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:29.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:30.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:30 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:31.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:31 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:32.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:32 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:32 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:47:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:47:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:33.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:47:33 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:33 np0005592158 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4243 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:34.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:34 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:35.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:35 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:36.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:36 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:36 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:37.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:47:37 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:38.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:39.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:39 np0005592158 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4248 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:39 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:40.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:40 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:41.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:41 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:42.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:42 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:42 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:47:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:43.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:43 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:43 np0005592158 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4253 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:44.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:44 np0005592158 podman[239999]: 2026-01-22 14:47:44.107698193 +0000 UTC m=+0.104544943 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 09:47:44 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:47:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:45.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:47:45 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:46.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:46 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:47.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:47:47.486 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:47:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:47:47.486 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:47:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:47:47.486 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:47:47 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:47:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:47:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:48.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:47:48 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:48 np0005592158 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4258 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:47:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:49.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:47:49 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:50.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:50 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:51.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:51 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:47:51 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:51 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:47:51 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:47:51 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:47:51 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:47:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:52.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:52 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:47:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:53.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:53 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:53 np0005592158 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4263 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:54.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:55 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:47:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:55.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:47:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:56.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:56 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:56 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:57.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:57 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:57 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:47:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:58.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:58 np0005592158 podman[240299]: 2026-01-22 14:47:58.336653856 +0000 UTC m=+0.082477245 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true)
Jan 22 09:47:58 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:58 np0005592158 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4268 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:58 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:47:58 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:47:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:47:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:59.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:59 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:00.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:00 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:00 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #142. Immutable memtables: 0.
Jan 22 09:48:00 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:48:00.878590) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:48:00 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 142
Jan 22 09:48:00 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093280878637, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 2436, "num_deletes": 251, "total_data_size": 4754012, "memory_usage": 4825040, "flush_reason": "Manual Compaction"}
Jan 22 09:48:00 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #143: started
Jan 22 09:48:00 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093280895373, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 143, "file_size": 3081552, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69691, "largest_seqno": 72122, "table_properties": {"data_size": 3072496, "index_size": 5229, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 23031, "raw_average_key_size": 21, "raw_value_size": 3052524, "raw_average_value_size": 2823, "num_data_blocks": 226, "num_entries": 1081, "num_filter_entries": 1081, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093119, "oldest_key_time": 1769093119, "file_creation_time": 1769093280, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:48:00 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 16838 microseconds, and 8103 cpu microseconds.
Jan 22 09:48:00 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:48:00 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:48:00.895431) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #143: 3081552 bytes OK
Jan 22 09:48:00 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:48:00.895454) [db/memtable_list.cc:519] [default] Level-0 commit table #143 started
Jan 22 09:48:00 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:48:00.897500) [db/memtable_list.cc:722] [default] Level-0 commit table #143: memtable #1 done
Jan 22 09:48:00 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:48:00.897516) EVENT_LOG_v1 {"time_micros": 1769093280897511, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:48:00 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:48:00.897533) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:48:00 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 4742925, prev total WAL file size 4742925, number of live WAL files 2.
Jan 22 09:48:00 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000139.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:48:00 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:48:00.898785) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Jan 22 09:48:00 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:48:00 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [143(3009KB)], [141(10MB)]
Jan 22 09:48:00 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093280898877, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [143], "files_L6": [141], "score": -1, "input_data_size": 13796885, "oldest_snapshot_seqno": -1}
Jan 22 09:48:00 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #144: 12052 keys, 12161721 bytes, temperature: kUnknown
Jan 22 09:48:00 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093280965485, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 144, "file_size": 12161721, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12093661, "index_size": 36843, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30149, "raw_key_size": 326789, "raw_average_key_size": 27, "raw_value_size": 11886395, "raw_average_value_size": 986, "num_data_blocks": 1377, "num_entries": 12052, "num_filter_entries": 12052, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769093280, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 144, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:48:00 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:48:00 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:48:00.965961) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 12161721 bytes
Jan 22 09:48:00 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:48:00.967371) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 206.8 rd, 182.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 10.2 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(8.4) write-amplify(3.9) OK, records in: 12569, records dropped: 517 output_compression: NoCompression
Jan 22 09:48:00 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:48:00.967410) EVENT_LOG_v1 {"time_micros": 1769093280967394, "job": 90, "event": "compaction_finished", "compaction_time_micros": 66731, "compaction_time_cpu_micros": 28881, "output_level": 6, "num_output_files": 1, "total_output_size": 12161721, "num_input_records": 12569, "num_output_records": 12052, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:48:00 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:48:00 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093280968892, "job": 90, "event": "table_file_deletion", "file_number": 143}
Jan 22 09:48:00 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000141.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:48:00 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093280972856, "job": 90, "event": "table_file_deletion", "file_number": 141}
Jan 22 09:48:00 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:48:00.898703) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:48:00 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:48:00.972909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:48:00 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:48:00.972917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:48:00 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:48:00.972920) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:48:00 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:48:00.972923) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:48:00 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:48:00.972926) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:48:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:01.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:01 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:02.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:02 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:02 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:48:03 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:48:03.225 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:48:03 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:48:03.227 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:48:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:48:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:03.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:48:03 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:03 np0005592158 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4273 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:04.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:04 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:05.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:05 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:06.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:07 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:07.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:07 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:48:08 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:08 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:08 np0005592158 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4278 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:08.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:09 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:09.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:10 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:10.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:11 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:48:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:11.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:48:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:48:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:12.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:48:12 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:12 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:48:13 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:48:13.230 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:48:13 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:13 np0005592158 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4283 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:13.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:14.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:14 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:15 np0005592158 podman[240342]: 2026-01-22 14:48:15.116434866 +0000 UTC m=+0.103951757 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 22 09:48:15 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:48:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:15.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:48:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:48:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:16.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:48:16 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:17.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:17 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:17 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:48:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:48:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:18.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:48:19 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:48:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:19.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:48:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:20.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:20 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:20 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:21.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:21 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:48:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:22.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:48:22 np0005592158 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4293 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:22 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:22 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:48:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:48:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:23.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:48:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:48:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:24.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:48:24 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:24 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:25.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:25 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:48:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:26.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:48:26 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:27.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:27 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:48:27 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:27 np0005592158 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4298 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:28.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:28 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:29 np0005592158 podman[240368]: 2026-01-22 14:48:29.059397532 +0000 UTC m=+0.049329707 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 22 09:48:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:48:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:29.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:48:29 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:30.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:30 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:31.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:31 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:32.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:32 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:48:32 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:48:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:33.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:48:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:34.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:34 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:34 np0005592158 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4303 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:35 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:35.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:36.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:36 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:48:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:37.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:48:37 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:48:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:38.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:38 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:38 np0005592158 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4308 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:48:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:39.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:48:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:40.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:40 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:40 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:40 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:41.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:41 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:42.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:42 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:48:42.251 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:48:42 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:48:42.252 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:48:42 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:48:43 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:48:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:43.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:48:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:44.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:44 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:44 np0005592158 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4313 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:44 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:45.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:45 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:46 np0005592158 podman[240387]: 2026-01-22 14:48:46.126452576 +0000 UTC m=+0.113122446 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 09:48:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:46.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:46 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:47.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:48:47.487 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:48:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:48:47.487 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:48:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:48:47.488 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:48:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:48:48 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:48:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:48.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:48:49 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:49 np0005592158 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4318 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:49 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:49.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:50.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:50 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:51 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:48:51.254 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:48:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:51.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:51 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:52.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:52 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:48:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:53.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:53 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:53 np0005592158 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4323 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:54.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:54 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:48:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:55.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:48:55 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:48:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:56.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:48:56 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:48:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:57.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:48:57 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:57 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:48:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:58.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:59 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:59 np0005592158 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4328 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:48:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:48:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:59.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:49:00 np0005592158 podman[240545]: 2026-01-22 14:49:00.06569068 +0000 UTC m=+0.056095860 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:49:00 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:00 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 09:49:00 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:49:00 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:49:00 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 09:49:00 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:00.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:01 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:01.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:02.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:02 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:49:02 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:49:02 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:02 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:49:02 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:49:02 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:49:02 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:49:03 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:03 np0005592158 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4333 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:49:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:03.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:49:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:04.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:04 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:05 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:05.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:06.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:06 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:07 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:49:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:07.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:49:07 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:49:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:08.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:08 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:08 np0005592158 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4338 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:09 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:09.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:10.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:10 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:49:10 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:49:10 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:11.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:11 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:49:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:12.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:49:12 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:49:12 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:49:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:13.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:49:13 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:13 np0005592158 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4343 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:14.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:14 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:15.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:15 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:16.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:16 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:16 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:17 np0005592158 podman[240615]: 2026-01-22 14:49:17.129424392 +0000 UTC m=+0.111493382 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 22 09:49:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:17.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:17 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:49:17 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:18.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:18 np0005592158 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4348 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:18 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:19.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:19 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:20.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:21 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:21.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:22 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:22.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:22 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:49:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:23.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:24 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:24 np0005592158 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4353 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:24.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:24 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:49:24.414 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:49:24 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:49:24.415 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:49:25 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:49:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:25.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:49:26 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:26.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:27 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:27 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:49:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:27.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:49:27 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:49:28 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:28 np0005592158 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4358 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:28.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:29 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:49:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:29.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:49:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:49:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:30.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:49:30 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:31 np0005592158 podman[240642]: 2026-01-22 14:49:31.097704876 +0000 UTC m=+0.083283549 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:49:31 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:49:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:31.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:49:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:32.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:32 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:32 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:49:32.417 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:49:32 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:49:33 np0005592158 ceph-mon[81715]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:33 np0005592158 ceph-mon[81715]: Health check update: 61 slow ops, oldest one blocked for 4363 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:33.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:34.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:34 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:35 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:35.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:49:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:36.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:49:36 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:37 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:37.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #145. Immutable memtables: 0.
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.166177) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 145
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378166709, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 1529, "num_deletes": 258, "total_data_size": 2843266, "memory_usage": 2876080, "flush_reason": "Manual Compaction"}
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #146: started
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378178691, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 146, "file_size": 1857518, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 72127, "largest_seqno": 73651, "table_properties": {"data_size": 1851434, "index_size": 3094, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15427, "raw_average_key_size": 20, "raw_value_size": 1838085, "raw_average_value_size": 2460, "num_data_blocks": 134, "num_entries": 747, "num_filter_entries": 747, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093281, "oldest_key_time": 1769093281, "file_creation_time": 1769093378, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 12586 microseconds, and 5647 cpu microseconds.
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.178774) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #146: 1857518 bytes OK
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.178825) [db/memtable_list.cc:519] [default] Level-0 commit table #146 started
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.180475) [db/memtable_list.cc:722] [default] Level-0 commit table #146: memtable #1 done
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.180496) EVENT_LOG_v1 {"time_micros": 1769093378180490, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.180516) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 2835929, prev total WAL file size 2844673, number of live WAL files 2.
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000142.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.181442) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323636' seq:72057594037927935, type:22 .. '6C6F676D0033353230' seq:0, type:0; will stop at (end)
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [146(1813KB)], [144(11MB)]
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378181516, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [146], "files_L6": [144], "score": -1, "input_data_size": 14019239, "oldest_snapshot_seqno": -1}
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #147: 12268 keys, 13865239 bytes, temperature: kUnknown
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378255500, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 147, "file_size": 13865239, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13794154, "index_size": 39292, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30725, "raw_key_size": 332861, "raw_average_key_size": 27, "raw_value_size": 13581451, "raw_average_value_size": 1107, "num_data_blocks": 1477, "num_entries": 12268, "num_filter_entries": 12268, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769093378, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 147, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.255800) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 13865239 bytes
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.257252) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 189.3 rd, 187.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 11.6 +0.0 blob) out(13.2 +0.0 blob), read-write-amplify(15.0) write-amplify(7.5) OK, records in: 12799, records dropped: 531 output_compression: NoCompression
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.257268) EVENT_LOG_v1 {"time_micros": 1769093378257260, "job": 92, "event": "compaction_finished", "compaction_time_micros": 74053, "compaction_time_cpu_micros": 43385, "output_level": 6, "num_output_files": 1, "total_output_size": 13865239, "num_input_records": 12799, "num_output_records": 12268, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378257691, "job": 92, "event": "table_file_deletion", "file_number": 146}
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000144.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378259563, "job": 92, "event": "table_file_deletion", "file_number": 144}
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.181374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.259625) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.259632) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.259634) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.259636) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.259638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #148. Immutable memtables: 0.
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.260044) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 148
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378260129, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 256, "num_deletes": 250, "total_data_size": 23018, "memory_usage": 28880, "flush_reason": "Manual Compaction"}
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #149: started
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378262247, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 149, "file_size": 13847, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 73653, "largest_seqno": 73907, "table_properties": {"data_size": 12094, "index_size": 49, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 645, "raw_key_size": 5124, "raw_average_key_size": 20, "raw_value_size": 8697, "raw_average_value_size": 34, "num_data_blocks": 2, "num_entries": 255, "num_filter_entries": 255, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093378, "oldest_key_time": 1769093378, "file_creation_time": 1769093378, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 2222 microseconds, and 848 cpu microseconds.
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.262278) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #149: 13847 bytes OK
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.262299) [db/memtable_list.cc:519] [default] Level-0 commit table #149 started
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.263376) [db/memtable_list.cc:722] [default] Level-0 commit table #149: memtable #1 done
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.263388) EVENT_LOG_v1 {"time_micros": 1769093378263384, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.263395) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 21000, prev total WAL file size 21000, number of live WAL files 2.
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000145.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.263710) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303037' seq:72057594037927935, type:22 .. '6D6772737461740032323538' seq:0, type:0; will stop at (end)
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [149(13KB)], [147(13MB)]
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378263737, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [149], "files_L6": [147], "score": -1, "input_data_size": 13879086, "oldest_snapshot_seqno": -1}
Jan 22 09:49:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:38.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #150: 12019 keys, 10006926 bytes, temperature: kUnknown
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378313149, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 150, "file_size": 10006926, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9942432, "index_size": 33341, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30085, "raw_key_size": 327879, "raw_average_key_size": 27, "raw_value_size": 9738944, "raw_average_value_size": 810, "num_data_blocks": 1228, "num_entries": 12019, "num_filter_entries": 12019, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769093378, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 150, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.313423) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 10006926 bytes
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.314534) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 280.3 rd, 202.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 13.2 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(1725.0) write-amplify(722.7) OK, records in: 12523, records dropped: 504 output_compression: NoCompression
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.314561) EVENT_LOG_v1 {"time_micros": 1769093378314549, "job": 94, "event": "compaction_finished", "compaction_time_micros": 49511, "compaction_time_cpu_micros": 24823, "output_level": 6, "num_output_files": 1, "total_output_size": 10006926, "num_input_records": 12523, "num_output_records": 12019, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378314726, "job": 94, "event": "table_file_deletion", "file_number": 149}
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000147.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378317643, "job": 94, "event": "table_file_deletion", "file_number": 147}
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.263604) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.317700) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.317706) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.317708) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.317710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:49:38.317712) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:38 np0005592158 ceph-mon[81715]: Health check update: 54 slow ops, oldest one blocked for 4368 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:39.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:39 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:49:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:40.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:49:40 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:41.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:41 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:49:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:42.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:49:42 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:42 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:49:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:43.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:43 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:43 np0005592158 ceph-mon[81715]: Health check update: 54 slow ops, oldest one blocked for 4373 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:44.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:44 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:49:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:45.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:49:45 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:46.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:46 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:49:47.487 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:49:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:49:47.488 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:49:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:49:47.488 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:49:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:47.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:49:47 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:48 np0005592158 podman[240663]: 2026-01-22 14:49:48.216680594 +0000 UTC m=+0.196229286 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:49:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:49:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:48.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:49:48 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:48 np0005592158 ceph-mon[81715]: Health check update: 54 slow ops, oldest one blocked for 4378 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:49:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:49.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:49:49 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:49 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:49:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:50.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:49:50 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:51.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:51 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:49:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:52.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:49:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:49:52 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:53.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:53 np0005592158 ceph-mon[81715]: Health check update: 54 slow ops, oldest one blocked for 4383 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:53 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:54.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:54 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:55.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:55 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:56.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:56 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:49:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:57.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:49:57 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:49:58 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:58.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:59 np0005592158 ceph-mon[81715]: Health check update: 54 slow ops, oldest one blocked for 4388 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:59 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:49:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:59.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:00 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:50:00 np0005592158 ceph-mon[81715]: Health detail: HEALTH_WARN 54 slow ops, oldest one blocked for 4388 sec, osd.2 has slow ops
Jan 22 09:50:00 np0005592158 ceph-mon[81715]: [WRN] SLOW_OPS: 54 slow ops, oldest one blocked for 4388 sec, osd.2 has slow ops
Jan 22 09:50:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:50:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:00.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:50:01 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:50:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:01.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:02 np0005592158 podman[240690]: 2026-01-22 14:50:02.06249134 +0000 UTC m=+0.053928506 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Jan 22 09:50:02 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:50:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:02.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:02 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:50:03 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:50:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:03.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:04 np0005592158 ceph-mon[81715]: Health check update: 54 slow ops, oldest one blocked for 4393 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:04 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:50:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:04.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:50:04 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:50:04.670 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:50:04 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:50:04.671 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:50:05 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:50:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:05.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:50:06 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:06.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:07 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:07.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:07 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:50:08 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:08 np0005592158 ceph-mon[81715]: Health check update: 62 slow ops, oldest one blocked for 4398 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:50:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:08.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:50:08 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:50:08.673 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:50:09 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:09.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:10 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:10.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:11 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:11 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:50:11 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:50:11 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:50:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:11.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:12 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:50:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:12.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:50:12 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:50:13 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:13 np0005592158 ceph-mon[81715]: Health check update: 62 slow ops, oldest one blocked for 4403 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:13.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:14 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:14.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:15 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:15.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:16.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:16 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:17 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:50:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:17.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:50:17 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:50:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:18.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:18 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:18 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:50:18 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:50:18 np0005592158 ceph-mon[81715]: Health check update: 62 slow ops, oldest one blocked for 4408 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:18 np0005592158 podman[240864]: 2026-01-22 14:50:18.502079154 +0000 UTC m=+0.094466130 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 22 09:50:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:50:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:19.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:50:19 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:20.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:20 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:50:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:21.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:50:21 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:50:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:22.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:50:22 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:50:22 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:50:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:23.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:50:23 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:23 np0005592158 ceph-mon[81715]: Health check update: 62 slow ops, oldest one blocked for 4413 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:50:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:24.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:50:24 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:25.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:25 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:25 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:50:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:26.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:50:26 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:27.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:27 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:50:27 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:28.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:28 np0005592158 ceph-mon[81715]: Health check update: 62 slow ops, oldest one blocked for 4418 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:28 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:50:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:29.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:50:29 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:50:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:30.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:50:30 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:50:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:31.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:50:31 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:50:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:32.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:50:32 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:50:33 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:33 np0005592158 podman[240918]: 2026-01-22 14:50:33.122563298 +0000 UTC m=+0.104032529 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:50:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:33.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:34 np0005592158 ceph-mon[81715]: Health check update: 62 slow ops, oldest one blocked for 4423 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:34 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:34.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:35 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:35.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:36 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:36.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:37 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:37.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:50:38 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:50:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:38.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:50:39 np0005592158 ceph-mon[81715]: Health check update: 62 slow ops, oldest one blocked for 4428 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:39 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:39.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:40 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:40.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:41 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:41.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:42 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:42.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:42 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:50:43 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:43 np0005592158 ceph-mon[81715]: Health check update: 62 slow ops, oldest one blocked for 4432 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:50:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:43.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:50:44 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:44.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:45 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:45 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #151. Immutable memtables: 0.
Jan 22 09:50:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:50:45.331352) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:50:45 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 151
Jan 22 09:50:45 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093445331403, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 1153, "num_deletes": 251, "total_data_size": 1915672, "memory_usage": 1936992, "flush_reason": "Manual Compaction"}
Jan 22 09:50:45 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #152: started
Jan 22 09:50:45 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093445340077, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 152, "file_size": 1258029, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 73912, "largest_seqno": 75060, "table_properties": {"data_size": 1253303, "index_size": 2121, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12396, "raw_average_key_size": 20, "raw_value_size": 1242935, "raw_average_value_size": 2068, "num_data_blocks": 92, "num_entries": 601, "num_filter_entries": 601, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093378, "oldest_key_time": 1769093378, "file_creation_time": 1769093445, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:50:45 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 8751 microseconds, and 3721 cpu microseconds.
Jan 22 09:50:45 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:50:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:50:45.340111) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #152: 1258029 bytes OK
Jan 22 09:50:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:50:45.340128) [db/memtable_list.cc:519] [default] Level-0 commit table #152 started
Jan 22 09:50:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:50:45.341535) [db/memtable_list.cc:722] [default] Level-0 commit table #152: memtable #1 done
Jan 22 09:50:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:50:45.341550) EVENT_LOG_v1 {"time_micros": 1769093445341545, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:50:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:50:45.341568) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:50:45 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 1909918, prev total WAL file size 1909918, number of live WAL files 2.
Jan 22 09:50:45 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000148.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:50:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:50:45.342186) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Jan 22 09:50:45 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:50:45 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [152(1228KB)], [150(9772KB)]
Jan 22 09:50:45 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093445342230, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [152], "files_L6": [150], "score": -1, "input_data_size": 11264955, "oldest_snapshot_seqno": -1}
Jan 22 09:50:45 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #153: 12105 keys, 9648861 bytes, temperature: kUnknown
Jan 22 09:50:45 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093445399518, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 153, "file_size": 9648861, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9584277, "index_size": 33239, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30277, "raw_key_size": 330813, "raw_average_key_size": 27, "raw_value_size": 9379677, "raw_average_value_size": 774, "num_data_blocks": 1219, "num_entries": 12105, "num_filter_entries": 12105, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769093445, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 153, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:50:45 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:50:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:50:45.399839) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 9648861 bytes
Jan 22 09:50:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:50:45.401014) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 196.3 rd, 168.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 9.5 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(16.6) write-amplify(7.7) OK, records in: 12620, records dropped: 515 output_compression: NoCompression
Jan 22 09:50:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:50:45.401036) EVENT_LOG_v1 {"time_micros": 1769093445401025, "job": 96, "event": "compaction_finished", "compaction_time_micros": 57373, "compaction_time_cpu_micros": 30946, "output_level": 6, "num_output_files": 1, "total_output_size": 9648861, "num_input_records": 12620, "num_output_records": 12105, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:50:45 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:50:45 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093445401488, "job": 96, "event": "table_file_deletion", "file_number": 152}
Jan 22 09:50:45 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000150.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:50:45 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093445403748, "job": 96, "event": "table_file_deletion", "file_number": 150}
Jan 22 09:50:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:50:45.342125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:50:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:50:45.403824) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:50:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:50:45.403829) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:50:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:50:45.403832) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:50:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:50:45.403834) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:50:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:50:45.403836) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:50:45 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:50:45.449 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:50:45 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:50:45.450 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:50:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:45.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:46 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:50:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:46.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:50:47 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:50:47.488 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:50:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:50:47.489 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:50:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:50:47.489 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:50:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:47.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:50:48 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:48 np0005592158 ceph-mon[81715]: Health check update: 62 slow ops, oldest one blocked for 4437 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:48.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:49 np0005592158 podman[240937]: 2026-01-22 14:50:49.130691407 +0000 UTC m=+0.120375389 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Jan 22 09:50:49 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:49.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:50.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:50 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:51 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:50:51.454 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:50:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:51.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:51 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:52.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:52 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:50:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:53.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:53 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:53 np0005592158 ceph-mon[81715]: Health check update: 62 slow ops, oldest one blocked for 4442 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:50:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:54.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:50:54 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:55.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:55 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:56.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:56 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:50:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:57.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:50:57 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:57 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:50:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:58.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:58 np0005592158 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:50:58 np0005592158 ceph-mon[81715]: Health check update: 62 slow ops, oldest one blocked for 4448 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:50:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:59.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:59 np0005592158 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:00.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:00 np0005592158 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:51:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:01.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:51:01 np0005592158 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:51:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:02.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:51:02 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:51:02 np0005592158 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:51:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:03.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:51:03 np0005592158 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:03 np0005592158 ceph-mon[81715]: Health check update: 32 slow ops, oldest one blocked for 4453 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:04 np0005592158 podman[240963]: 2026-01-22 14:51:04.069475702 +0000 UTC m=+0.053139265 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 22 09:51:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:04.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:04 np0005592158 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:51:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:05.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:51:05 np0005592158 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:05 np0005592158 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:06.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:06 np0005592158 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:51:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:07.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:51:07 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:51:07 np0005592158 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:08.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:08 np0005592158 ceph-mon[81715]: Health check update: 32 slow ops, oldest one blocked for 4458 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:08 np0005592158 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:51:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:09.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:51:09 np0005592158 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:51:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:10.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:51:10 np0005592158 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:11.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:11 np0005592158 ceph-mgr[82073]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 09:51:12 np0005592158 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:12.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:12 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:51:13 np0005592158 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:13.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:14 np0005592158 ceph-mon[81715]: Health check update: 32 slow ops, oldest one blocked for 4462 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:14 np0005592158 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:14.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:15 np0005592158 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:15.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:16 np0005592158 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:16.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:17 np0005592158 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:17.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:17 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:51:18 np0005592158 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:18 np0005592158 ceph-mon[81715]: Health check update: 32 slow ops, oldest one blocked for 4467 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:18.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:19 np0005592158 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:51:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:19.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:51:20 np0005592158 podman[241112]: 2026-01-22 14:51:20.113293074 +0000 UTC m=+0.104190282 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:51:20 np0005592158 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:20 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:51:20 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:51:20 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:51:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:20.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:21 np0005592158 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:51:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:21.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:51:22 np0005592158 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:51:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:22.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:51:22 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:51:23 np0005592158 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:23 np0005592158 ceph-mon[81715]: Health check update: 32 slow ops, oldest one blocked for 4472 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:51:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:23.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:51:24 np0005592158 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:24.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:25 np0005592158 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:25.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:26 np0005592158 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:51:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:26.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:51:27 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:51:27.279 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:51:27 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:51:27.280 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:51:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:51:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:27.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:51:27 np0005592158 ceph-mon[81715]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:27 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:51:27 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:51:27 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:51:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:28.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:28 np0005592158 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:51:28 np0005592158 ceph-mon[81715]: Health check update: 32 slow ops, oldest one blocked for 4477 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:29 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:51:29.282 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:51:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:51:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:29.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:51:29 np0005592158 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:51:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:30.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:30 np0005592158 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:51:30 np0005592158 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:51:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:51:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:31.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:51:31 np0005592158 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:51:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:32.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:32 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:51:32 np0005592158 ceph-mon[81715]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:51:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:33.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:33 np0005592158 ceph-mon[81715]: Health check update: 63 slow ops, oldest one blocked for 4482 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:33 np0005592158 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:34.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:34 np0005592158 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:35 np0005592158 podman[241187]: 2026-01-22 14:51:35.051712628 +0000 UTC m=+0.046894485 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 22 09:51:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:35.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:35 np0005592158 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:36.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:36 np0005592158 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:51:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:37.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:51:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:51:38 np0005592158 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:51:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:38.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:51:39 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 e162: 3 total, 3 up, 3 in
Jan 22 09:51:39 np0005592158 ceph-mon[81715]: Health check update: 11 slow ops, oldest one blocked for 4487 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:39 np0005592158 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:51:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:39.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:51:40 np0005592158 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:40.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:41 np0005592158 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:51:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:41.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:51:42 np0005592158 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:42.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:42 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:51:43 np0005592158 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:51:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:43.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:51:44 np0005592158 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:44 np0005592158 ceph-mon[81715]: Health check update: 11 slow ops, oldest one blocked for 4493 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:44.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:45 np0005592158 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:45.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:46 np0005592158 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:51:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:46.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:51:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:51:47.490 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:51:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:51:47.490 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:51:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:51:47.490 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:51:47 np0005592158 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:51:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:47.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:51:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:51:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:48.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:48 np0005592158 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:48 np0005592158 ceph-mon[81715]: Health check update: 11 slow ops, oldest one blocked for 4498 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:49 np0005592158 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:49.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:50.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:50 np0005592158 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:51 np0005592158 podman[241206]: 2026-01-22 14:51:51.078187785 +0000 UTC m=+0.069620060 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 22 09:51:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:51:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:51.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:51:52 np0005592158 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:52.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:51:53 np0005592158 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:53 np0005592158 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:53.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:54 np0005592158 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:54 np0005592158 ceph-mon[81715]: Health check update: 11 slow ops, oldest one blocked for 4503 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:54.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:55 np0005592158 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:55.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:56 np0005592158 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:56.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:51:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:57.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:51:57 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:51:57 np0005592158 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:58.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:59 np0005592158 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:59 np0005592158 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:59 np0005592158 ceph-mon[81715]: Health check update: 11 slow ops, oldest one blocked for 4508 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:51:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:59.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:00.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:00 np0005592158 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:01.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:02 np0005592158 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:52:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:02.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:52:02 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:52:03 np0005592158 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:03 np0005592158 ceph-mon[81715]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:03.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:04.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:04 np0005592158 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 09:52:04 np0005592158 ceph-mon[81715]: Health check update: 11 slow ops, oldest one blocked for 4513 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:05.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:06 np0005592158 podman[241234]: 2026-01-22 14:52:06.076638294 +0000 UTC m=+0.061663475 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 22 09:52:06 np0005592158 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 09:52:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:06.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:07 np0005592158 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 09:52:07 np0005592158 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 09:52:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:07.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:07 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:52:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:08.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:08 np0005592158 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 09:52:09 np0005592158 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 09:52:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:09.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:10.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:10 np0005592158 ceph-mon[81715]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 09:52:11 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:52:11.063 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:52:11 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:52:11.064 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:52:11 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:52:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:11.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:52:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:52:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:12.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:52:12 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:12 np0005592158 ceph-mon[81715]: Health check update: 59 slow ops, oldest one blocked for 4523 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:12 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:52:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.003000080s ======
Jan 22 09:52:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:13.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Jan 22 09:52:14 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:14 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:14.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:15 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:15.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:16 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:52:16.066 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:52:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:16.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:16 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:17.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:17 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:52:18 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:18 np0005592158 ceph-mon[81715]: Health check update: 59 slow ops, oldest one blocked for 4528 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:18.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:19 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:19 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:52:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:19.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:52:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:20.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:20 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:52:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:21.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:52:21 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:21 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:22 np0005592158 podman[241254]: 2026-01-22 14:52:22.139458869 +0000 UTC m=+0.126011761 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller)
Jan 22 09:52:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:22.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:22 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:52:23 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:23.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:24.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:24 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:24 np0005592158 ceph-mon[81715]: Health check update: 59 slow ops, oldest one blocked for 4533 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:52:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:25.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:52:26 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:52:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:26.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:52:27 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:27 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:27.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:27 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:52:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:28.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:28 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:28 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 09:52:28 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:52:28 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:52:28 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:52:28 np0005592158 ceph-mon[81715]: Health check update: 59 slow ops, oldest one blocked for 4538 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:29 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:29.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:30.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:30 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:31.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:32 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:32 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:32.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:32 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:52:33 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:33.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:34.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:34 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:34 np0005592158 ceph-mon[81715]: Health check update: 13 slow ops, oldest one blocked for 4543 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:35 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:35.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:36.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:37 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:37 np0005592158 podman[241410]: 2026-01-22 14:52:37.048330315 +0000 UTC m=+0.045214111 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 09:52:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:52:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:37.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:52:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:52:38 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:38 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:38 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:52:38 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:52:38 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #154. Immutable memtables: 0.
Jan 22 09:52:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:52:38.463235) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:52:38 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 154
Jan 22 09:52:38 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093558463327, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 1699, "num_deletes": 250, "total_data_size": 3210495, "memory_usage": 3277872, "flush_reason": "Manual Compaction"}
Jan 22 09:52:38 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #155: started
Jan 22 09:52:38 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093558481960, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 155, "file_size": 2099552, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75065, "largest_seqno": 76759, "table_properties": {"data_size": 2092912, "index_size": 3521, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16257, "raw_average_key_size": 19, "raw_value_size": 2078219, "raw_average_value_size": 2546, "num_data_blocks": 154, "num_entries": 816, "num_filter_entries": 816, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093446, "oldest_key_time": 1769093446, "file_creation_time": 1769093558, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:52:38 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 18749 microseconds, and 8858 cpu microseconds.
Jan 22 09:52:38 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:52:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:52:38.482003) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #155: 2099552 bytes OK
Jan 22 09:52:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:52:38.482023) [db/memtable_list.cc:519] [default] Level-0 commit table #155 started
Jan 22 09:52:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:52:38.483713) [db/memtable_list.cc:722] [default] Level-0 commit table #155: memtable #1 done
Jan 22 09:52:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:52:38.483727) EVENT_LOG_v1 {"time_micros": 1769093558483722, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:52:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:52:38.483745) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:52:38 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 3202444, prev total WAL file size 3202444, number of live WAL files 2.
Jan 22 09:52:38 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000151.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:52:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:52:38.484565) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323530' seq:72057594037927935, type:22 .. '6B7600353031' seq:0, type:0; will stop at (end)
Jan 22 09:52:38 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:52:38 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [155(2050KB)], [153(9422KB)]
Jan 22 09:52:38 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093558484685, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [155], "files_L6": [153], "score": -1, "input_data_size": 11748413, "oldest_snapshot_seqno": -1}
Jan 22 09:52:38 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #156: 12404 keys, 10657385 bytes, temperature: kUnknown
Jan 22 09:52:38 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093558535372, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 156, "file_size": 10657385, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10590391, "index_size": 34881, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31045, "raw_key_size": 339539, "raw_average_key_size": 27, "raw_value_size": 10379673, "raw_average_value_size": 836, "num_data_blocks": 1270, "num_entries": 12404, "num_filter_entries": 12404, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769093558, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 156, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:52:38 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:52:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:52:38.535893) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 10657385 bytes
Jan 22 09:52:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:52:38.537182) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 231.4 rd, 210.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 9.2 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(10.7) write-amplify(5.1) OK, records in: 12921, records dropped: 517 output_compression: NoCompression
Jan 22 09:52:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:52:38.537207) EVENT_LOG_v1 {"time_micros": 1769093558537196, "job": 98, "event": "compaction_finished", "compaction_time_micros": 50761, "compaction_time_cpu_micros": 27007, "output_level": 6, "num_output_files": 1, "total_output_size": 10657385, "num_input_records": 12921, "num_output_records": 12404, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:52:38 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:52:38 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093558538061, "job": 98, "event": "table_file_deletion", "file_number": 155}
Jan 22 09:52:38 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000153.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:52:38 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093558539689, "job": 98, "event": "table_file_deletion", "file_number": 153}
Jan 22 09:52:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:52:38.484469) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:52:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:52:38.539763) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:52:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:52:38.539769) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:52:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:52:38.539770) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:52:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:52:38.539772) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:52:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:52:38.539773) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:52:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:38.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:39 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:39 np0005592158 ceph-mon[81715]: Health check update: 13 slow ops, oldest one blocked for 4548 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:39.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:52:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:40.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:52:41 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:41.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:42 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:42 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:42.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:42 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:52:43 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:43.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:44 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:44 np0005592158 ceph-mon[81715]: Health check update: 13 slow ops, oldest one blocked for 4553 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:52:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:44.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:52:45 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:45.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:46 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:52:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:46.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:52:47 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:52:47.490 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:52:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:52:47.491 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:52:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:52:47.491 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:52:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:47.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:52:48 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:48.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:49 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:49 np0005592158 ceph-mon[81715]: Health check update: 13 slow ops, oldest one blocked for 4558 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:52:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:49.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:52:50 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:50.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:51 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:52:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:51.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:52:52 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:52:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:52.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:52:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:52:53 np0005592158 podman[241479]: 2026-01-22 14:52:53.104369109 +0000 UTC m=+0.096016653 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 09:52:53 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:52:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:53.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:52:53 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:52:53.974 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:52:53 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:52:53.976 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:52:54 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:54 np0005592158 ceph-mon[81715]: Health check update: 13 slow ops, oldest one blocked for 4562 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:52:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:54.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:52:55 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:55.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:56 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:52:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:56.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:52:57 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:52:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:57.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:52:57 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:52:58 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:52:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:58.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:59 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:52:59 np0005592158 ceph-mon[81715]: Health check update: 13 slow ops, oldest one blocked for 4567 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:52:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:59.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:59 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:52:59.978 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:53:00 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:00.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:01 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:53:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:01.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:53:02 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:02.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:02 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:53:03 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:53:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:03.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:53:04 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:04 np0005592158 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 4572 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:04.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:05 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:05.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:06 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:06.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:07 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:07.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:07 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:53:08 np0005592158 podman[241505]: 2026-01-22 14:53:08.060177627 +0000 UTC m=+0.050211206 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 22 09:53:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:08.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:08 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:08 np0005592158 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 4577 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:53:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:09.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:53:09 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:09 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:10.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:11 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:53:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:11.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:53:12 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:53:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:12.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:53:12 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:53:13 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:13.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:14 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:14 np0005592158 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 4582 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:14.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:15 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:53:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:15.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:53:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:16.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:16 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:53:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:17.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:53:17 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:53:18 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:53:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:18.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:53:19 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:19 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:19 np0005592158 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 4587 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:53:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:19.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:53:20 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:20.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:21 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:21.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:22 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:53:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:22.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:53:22 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:53:23 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:23 np0005592158 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 4592 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:53:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:23.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:53:24 np0005592158 podman[241528]: 2026-01-22 14:53:24.120242069 +0000 UTC m=+0.101781447 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 22 09:53:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:53:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:24.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:53:24 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:25 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:25.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:26.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:26 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:27 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:27 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:53:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:27.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:28.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:28 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:28 np0005592158 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 4597 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:29.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:29 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:30.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:30 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:30 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:53:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:31.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:53:31 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:32.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:32 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:53:33 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:53:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:33.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:53:34 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:34 np0005592158 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 4602 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:34.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:35 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:53:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:35.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:53:36 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:36.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:37 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:53:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:37.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:38 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:38 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #157. Immutable memtables: 0.
Jan 22 09:53:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:53:38.314705) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:53:38 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 157
Jan 22 09:53:38 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093618314745, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 1001, "num_deletes": 251, "total_data_size": 1620554, "memory_usage": 1650824, "flush_reason": "Manual Compaction"}
Jan 22 09:53:38 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #158: started
Jan 22 09:53:38 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093618324534, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 158, "file_size": 1064004, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 76764, "largest_seqno": 77760, "table_properties": {"data_size": 1059795, "index_size": 1732, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10912, "raw_average_key_size": 20, "raw_value_size": 1050737, "raw_average_value_size": 1953, "num_data_blocks": 75, "num_entries": 538, "num_filter_entries": 538, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093559, "oldest_key_time": 1769093559, "file_creation_time": 1769093618, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:53:38 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 9896 microseconds, and 5381 cpu microseconds.
Jan 22 09:53:38 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:53:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:53:38.324597) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #158: 1064004 bytes OK
Jan 22 09:53:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:53:38.324629) [db/memtable_list.cc:519] [default] Level-0 commit table #158 started
Jan 22 09:53:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:53:38.326589) [db/memtable_list.cc:722] [default] Level-0 commit table #158: memtable #1 done
Jan 22 09:53:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:53:38.326610) EVENT_LOG_v1 {"time_micros": 1769093618326603, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:53:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:53:38.326632) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:53:38 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 1615456, prev total WAL file size 1615456, number of live WAL files 2.
Jan 22 09:53:38 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000154.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:53:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:53:38.328079) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Jan 22 09:53:38 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:53:38 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [158(1039KB)], [156(10MB)]
Jan 22 09:53:38 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093618328111, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [158], "files_L6": [156], "score": -1, "input_data_size": 11721389, "oldest_snapshot_seqno": -1}
Jan 22 09:53:38 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #159: 12431 keys, 10129817 bytes, temperature: kUnknown
Jan 22 09:53:38 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093618384534, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 159, "file_size": 10129817, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10063191, "index_size": 34449, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31109, "raw_key_size": 341173, "raw_average_key_size": 27, "raw_value_size": 9852380, "raw_average_value_size": 792, "num_data_blocks": 1246, "num_entries": 12431, "num_filter_entries": 12431, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769093618, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 159, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:53:38 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:53:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:53:38.384799) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 10129817 bytes
Jan 22 09:53:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:53:38.386085) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 207.4 rd, 179.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 10.2 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(20.5) write-amplify(9.5) OK, records in: 12942, records dropped: 511 output_compression: NoCompression
Jan 22 09:53:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:53:38.386102) EVENT_LOG_v1 {"time_micros": 1769093618386094, "job": 100, "event": "compaction_finished", "compaction_time_micros": 56506, "compaction_time_cpu_micros": 32693, "output_level": 6, "num_output_files": 1, "total_output_size": 10129817, "num_input_records": 12942, "num_output_records": 12431, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:53:38 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:53:38 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093618386589, "job": 100, "event": "table_file_deletion", "file_number": 158}
Jan 22 09:53:38 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000156.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:53:38 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093618388496, "job": 100, "event": "table_file_deletion", "file_number": 156}
Jan 22 09:53:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:53:38.327686) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:53:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:53:38.388580) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:53:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:53:38.388585) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:53:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:53:38.388586) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:53:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:53:38.388588) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:53:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:53:38.388590) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:53:38 np0005592158 podman[241579]: 2026-01-22 14:53:38.501753135 +0000 UTC m=+0.051282275 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 22 09:53:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:38.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:39 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:39 np0005592158 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 4607 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:39 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:53:39 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:53:39 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:53:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:39.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:40 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:40.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:41 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:41.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:53:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:42.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:53:42 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:42 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:53:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:43.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:44 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:44 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:44 np0005592158 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 4613 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:53:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:44.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:53:45 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:53:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:45.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:53:46 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:53:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:46.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:53:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:53:47.491 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:53:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:53:47.492 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:53:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:53:47.492 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:53:47 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:47 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:53:47 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:53:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:47.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:53:48 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:48 np0005592158 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 4618 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:53:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:48.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:53:49 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:49.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:50 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:53:50.104 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:53:50 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:53:50.106 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:53:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:50.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:50 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:51.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:52 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:52 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:53:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:52.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:53:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:53:53 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:53:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:53.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:53:54 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:54 np0005592158 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 4622 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:54.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:55 np0005592158 podman[241756]: 2026-01-22 14:53:55.118897451 +0000 UTC m=+0.110485383 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, 
tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:53:55 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:55.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:56 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:53:56.107 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:53:56 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:56.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:57 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:57.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:57 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:53:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:58.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:59 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:59 np0005592158 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 4627 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:53:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:59.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:00 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:00 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:54:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:00.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:54:01 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:01.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:02.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:54:03 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:03.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:54:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:04.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:54:04 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:04 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:04 np0005592158 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 4632 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:05 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:05.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:06.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:06 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:07 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:07.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:54:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:54:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:08.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:54:09 np0005592158 podman[241783]: 2026-01-22 14:54:09.069120495 +0000 UTC m=+0.054843280 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:54:09 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:09 np0005592158 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 4638 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:09.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:10 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:10 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:10.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:11 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:54:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:11.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:54:12 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:12.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:13 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:54:13 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #160. Immutable memtables: 0.
Jan 22 09:54:13 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:54:13.560883) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:54:13 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 160
Jan 22 09:54:13 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093653560924, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 694, "num_deletes": 256, "total_data_size": 999112, "memory_usage": 1012248, "flush_reason": "Manual Compaction"}
Jan 22 09:54:13 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #161: started
Jan 22 09:54:13 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093653567614, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 161, "file_size": 656855, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 77765, "largest_seqno": 78454, "table_properties": {"data_size": 653574, "index_size": 1124, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8446, "raw_average_key_size": 19, "raw_value_size": 646531, "raw_average_value_size": 1493, "num_data_blocks": 48, "num_entries": 433, "num_filter_entries": 433, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093619, "oldest_key_time": 1769093619, "file_creation_time": 1769093653, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:54:13 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 6818 microseconds, and 3101 cpu microseconds.
Jan 22 09:54:13 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:54:13 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:54:13.567699) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #161: 656855 bytes OK
Jan 22 09:54:13 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:54:13.567725) [db/memtable_list.cc:519] [default] Level-0 commit table #161 started
Jan 22 09:54:13 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:54:13.569164) [db/memtable_list.cc:722] [default] Level-0 commit table #161: memtable #1 done
Jan 22 09:54:13 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:54:13.569189) EVENT_LOG_v1 {"time_micros": 1769093653569181, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:54:13 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:54:13.569216) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:54:13 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 995234, prev total WAL file size 995234, number of live WAL files 2.
Jan 22 09:54:13 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000157.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:54:13 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:54:13.570156) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353139' seq:72057594037927935, type:22 .. '6C6F676D0033373732' seq:0, type:0; will stop at (end)
Jan 22 09:54:13 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:54:13 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [161(641KB)], [159(9892KB)]
Jan 22 09:54:13 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093653570258, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [161], "files_L6": [159], "score": -1, "input_data_size": 10786672, "oldest_snapshot_seqno": -1}
Jan 22 09:54:13 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #162: 12340 keys, 10643294 bytes, temperature: kUnknown
Jan 22 09:54:13 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093653648870, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 162, "file_size": 10643294, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10576425, "index_size": 34884, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30917, "raw_key_size": 340474, "raw_average_key_size": 27, "raw_value_size": 10366369, "raw_average_value_size": 840, "num_data_blocks": 1261, "num_entries": 12340, "num_filter_entries": 12340, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769093653, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 162, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:54:13 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:54:13 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:54:13.649281) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 10643294 bytes
Jan 22 09:54:13 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:54:13.651067) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 137.0 rd, 135.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 9.7 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(32.6) write-amplify(16.2) OK, records in: 12864, records dropped: 524 output_compression: NoCompression
Jan 22 09:54:13 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:54:13.651088) EVENT_LOG_v1 {"time_micros": 1769093653651078, "job": 102, "event": "compaction_finished", "compaction_time_micros": 78755, "compaction_time_cpu_micros": 31911, "output_level": 6, "num_output_files": 1, "total_output_size": 10643294, "num_input_records": 12864, "num_output_records": 12340, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:54:13 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:54:13 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093653651316, "job": 102, "event": "table_file_deletion", "file_number": 161}
Jan 22 09:54:13 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000159.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:54:13 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093653652854, "job": 102, "event": "table_file_deletion", "file_number": 159}
Jan 22 09:54:13 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:54:13.570063) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:54:13 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:54:13.652896) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:54:13 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:54:13.652901) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:54:13 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:54:13.652902) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:54:13 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:54:13.652904) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:54:13 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:54:13.652905) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:54:13 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:13 np0005592158 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 4643 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:54:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:13.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:54:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:14.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:14 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:15.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:15 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:54:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:16.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:54:17 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:17 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:17.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:54:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:54:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4136729720' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:54:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:54:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4136729720' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:54:18 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:54:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:18.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:54:19 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:19 np0005592158 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 4647 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:19.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:20.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:21 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:21.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:22 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:22 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:22.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:54:23 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:54:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:23.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:54:24 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:24 np0005592158 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 4652 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:24.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:25 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:25.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:26 np0005592158 podman[241802]: 2026-01-22 14:54:26.105613588 +0000 UTC m=+0.091491531 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, 
org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:54:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:54:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:26.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:54:27 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:27.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:28 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:54:28 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:28 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:28.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:29 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:29 np0005592158 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 4657 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:54:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:29.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:54:30 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:30.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:31 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:54:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:31.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:54:32 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:32.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:54:33 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:54:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:33.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:54:34 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:34 np0005592158 ceph-mon[81715]: Health check update: 41 slow ops, oldest one blocked for 4662 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:34 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 09:54:34 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.5 total, 600.0 interval#012Cumulative writes: 13K writes, 42K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 13K writes, 4199 syncs, 3.13 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1191 writes, 2052 keys, 1191 commit groups, 1.0 writes per commit group, ingest: 0.86 MB, 0.00 MB/s#012Interval WAL: 1191 writes, 562 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 09:54:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:34.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:35 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:35.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:36 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:36.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:37 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:54:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:37.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:54:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:54:38 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:38 np0005592158 ceph-mon[81715]: Health check update: 41 slow ops, oldest one blocked for 4667 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:38.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:39 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:54:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:39.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:54:40 np0005592158 podman[241828]: 2026-01-22 14:54:40.071800216 +0000 UTC m=+0.060825751 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible)
Jan 22 09:54:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:54:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:40.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:54:40 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:41.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:42 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:54:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:42.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:54:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:54:43 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:43 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:43.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:44 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:44 np0005592158 ceph-mon[81715]: Health check update: 41 slow ops, oldest one blocked for 4672 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:54:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:44.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:54:45 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:54:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:45.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:54:46 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:54:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:46.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:54:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:54:47.492 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:54:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:54:47.492 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:54:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:54:47.493 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:54:47 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:47.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:54:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:48.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:48 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:48 np0005592158 ceph-mon[81715]: Health check update: 13 slow ops, oldest one blocked for 4677 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:49 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:49 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:49 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:54:49 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:54:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:49.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:50.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:50 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:50 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:54:50 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:54:50 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:54:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:54:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:51.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:54:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:52.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:53 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:53 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:54:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:53.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:54 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:54 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:54 np0005592158 ceph-mon[81715]: Health check update: 13 slow ops, oldest one blocked for 4682 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:54.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:55 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:55.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:56 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:56.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:57 np0005592158 podman[241978]: 2026-01-22 14:54:57.097200618 +0000 UTC m=+0.084966375 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 09:54:57 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:54:57.185 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:54:57 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:54:57.186 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:54:57 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:54:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:57.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:54:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:54:58 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:54:58 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:54:58 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:58.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:59 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:59 np0005592158 ceph-mon[81715]: Health check update: 13 slow ops, oldest one blocked for 4687 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:54:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:59.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:00.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:01 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:01 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:01.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:02 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:55:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:02.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:55:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:55:03 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:03.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:04 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:55:04.188 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:55:04 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:04 np0005592158 ceph-mon[81715]: Health check update: 13 slow ops, oldest one blocked for 4692 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:04.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:05 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:55:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:05.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:55:06 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:06.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:07 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:08.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:55:08 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:08 np0005592158 ceph-mon[81715]: Health check update: 13 slow ops, oldest one blocked for 4698 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:55:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:08.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:55:09 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:10.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:10 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:10.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:11 np0005592158 podman[242055]: 2026-01-22 14:55:11.113152327 +0000 UTC m=+0.096040643 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 22 09:55:11 np0005592158 ceph-mon[81715]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:55:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:12.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:55:12 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:55:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:12.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:55:13 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:55:13 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:13 np0005592158 ceph-mon[81715]: Health check update: 13 slow ops, oldest one blocked for 4702 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:14.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:14 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:14.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:15 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:16.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:16 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:55:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:16.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:55:17 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:18.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:55:18 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:18 np0005592158 ceph-mon[81715]: Health check update: 83 slow ops, oldest one blocked for 4707 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:18.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:19 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:20.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:20.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:20 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:21 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:55:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:22.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:55:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:22.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:22 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:55:23 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:23 np0005592158 ceph-mon[81715]: Health check update: 83 slow ops, oldest one blocked for 4713 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:24.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:24.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:24 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:25 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:26.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:55:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:26.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:55:26 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:26 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:28.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:28 np0005592158 podman[242075]: 2026-01-22 14:55:28.088464347 +0000 UTC m=+0.072357914 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, 
org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Jan 22 09:55:28 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:28 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:55:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:28.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:29 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:29 np0005592158 ceph-mon[81715]: Health check update: 83 slow ops, oldest one blocked for 4718 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:55:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:30.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:55:30 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:55:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:30.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:55:31 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:31 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 09:55:31 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4800.0 total, 600.0 interval
Cumulative writes: 14K writes, 79K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
Cumulative WAL: 14K writes, 14K syncs, 1.00 writes per sync, written: 0.14 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1861 writes, 9620 keys, 1861 commit groups, 1.0 writes per commit group, ingest: 16.43 MB, 0.03 MB/s
Interval WAL: 1861 writes, 1861 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     67.5      1.27              0.28        51    0.025       0      0       0.0       0.0
  L6      1/0   10.15 MB   0.0      0.5     0.1      0.4       0.5      0.0       0.0   5.5    140.7    121.2      3.88              1.34        50    0.078    454K    26K       0.0       0.0
 Sum      1/0   10.15 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.5    106.0    108.0      5.15              1.62       101    0.051    454K    26K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.7    163.2    163.1      0.51              0.26        14    0.036     89K   3619       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.5      0.0       0.0   0.0    140.7    121.2      3.88              1.34        50    0.078    454K    26K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     67.6      1.27              0.28        50    0.025       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 4800.0 total, 600.0 interval
Flush(GB): cumulative 0.084, interval 0.009
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.54 GB write, 0.12 MB/s write, 0.53 GB read, 0.11 MB/s read, 5.2 seconds
Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.5 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55f7686a91f0#2 capacity: 304.00 MB usage: 58.02 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000275 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3064,55.17 MB,18.1467%) FilterBlock(101,1.23 MB,0.403088%) IndexBlock(101,1.63 MB,0.536402%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Jan 22 09:55:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:55:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:32.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:55:32 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:32.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:55:33 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:34.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:34 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:34 np0005592158 ceph-mon[81715]: Health check update: 83 slow ops, oldest one blocked for 4723 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:34.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:35 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:36.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:55:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:36.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:55:37 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:38.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:38 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:38 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:55:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:38.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:39 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:39 np0005592158 ceph-mon[81715]: Health check update: 83 slow ops, oldest one blocked for 4728 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:55:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:40.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:55:40 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:40.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:41 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:41 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:55:41.939 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 09:55:41 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:55:41.940 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 09:55:41 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:55:41.941 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 09:55:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:55:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:42.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:55:42 np0005592158 podman[242101]: 2026-01-22 14:55:42.085463013 +0000 UTC m=+0.075401186 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 22 09:55:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:42.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:42 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:55:44 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:44 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:44 np0005592158 ceph-mon[81715]: Health check update: 83 slow ops, oldest one blocked for 4732 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:55:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:44.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:55:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:55:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:44.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:55:45 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:46.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:46 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:46.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:47 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:55:47.493 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:55:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:55:47.494 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:55:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:55:47.494 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:55:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:55:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:48.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:55:48 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:55:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:55:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:48.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:55:49 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:49 np0005592158 ceph-mon[81715]: Health check update: 83 slow ops, oldest one blocked for 4737 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:55:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:50.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:55:50 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:50.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:51 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:52.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:52 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:55:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:52.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:55:53 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:55:53 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:54.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:54 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:54 np0005592158 ceph-mon[81715]: Health check update: 83 slow ops, oldest one blocked for 4742 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:55:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:54.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:55:55 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:56.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:56 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:56.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:57 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:58.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:55:58 np0005592158 podman[242196]: 2026-01-22 14:55:58.318861413 +0000 UTC m=+0.143917535 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, 
io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 22 09:55:58 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:58 np0005592158 podman[242322]: 2026-01-22 14:55:58.721944981 +0000 UTC m=+0.065704385 container exec 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 22 09:55:58 np0005592158 podman[242322]: 2026-01-22 14:55:58.813210673 +0000 UTC m=+0.156970037 container exec_died 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:55:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:55:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:58.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:59 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:59 np0005592158 ceph-mon[81715]: Health check update: 83 slow ops, oldest one blocked for 4747 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:59 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:56:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:56:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:00.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:56:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:56:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:00.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:56:01 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:01 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:56:01 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:56:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:56:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:02.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:56:02 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:02 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:56:02 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:56:02 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:02.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:56:03 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:56:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:04.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:56:04 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:04 np0005592158 ceph-mon[81715]: Health check update: 83 slow ops, oldest one blocked for 4752 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:56:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:04.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:05 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:06.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:56:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:06.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:56:07 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:07 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:56:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:08.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:56:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:56:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:08.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:08 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:08 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:08 np0005592158 ceph-mon[81715]: Health check update: 83 slow ops, oldest one blocked for 4757 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:56:10 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:10.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:10.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:11 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:11 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:56:11 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:56:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:56:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:12.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:56:12 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:56:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:12.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:56:13 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:56:13 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:13 np0005592158 podman[242626]: 2026-01-22 14:56:13.463434348 +0000 UTC m=+0.087375958 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 09:56:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:14.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:14 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:14 np0005592158 ceph-mon[81715]: Health check update: 83 slow ops, oldest one blocked for 4762 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:56:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:56:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:14.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:56:15 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:16.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:16 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:16.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:17 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:18.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:56:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:18.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:18 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:56:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:20.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:56:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:20.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:22.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:22.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:56:23 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:23 np0005592158 ceph-mon[81715]: Health check update: 83 slow ops, oldest one blocked for 4767 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:56:23 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:56:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:24.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:56:24 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:56:24.142 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:56:24 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:56:24.144 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:56:24 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:24 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:24 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:24 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:24.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:56:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:26.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:56:26 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:26 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:56:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:26.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:56:27 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:56:27.147 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:56:27 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:28.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:28 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:56:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:28.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:28 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:28 np0005592158 ceph-mon[81715]: Health check update: 83 slow ops, oldest one blocked for 4777 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:56:29 np0005592158 podman[242646]: 2026-01-22 14:56:29.132864347 +0000 UTC m=+0.113390470 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller)
Jan 22 09:56:29 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:29 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:56:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:30.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:56:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:30.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:31 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:32.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:32 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:32 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #163. Immutable memtables: 0.
Jan 22 09:56:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:56:32.397438) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:56:32 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 163
Jan 22 09:56:32 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093792397482, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 2023, "num_deletes": 251, "total_data_size": 3987062, "memory_usage": 4042832, "flush_reason": "Manual Compaction"}
Jan 22 09:56:32 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #164: started
Jan 22 09:56:32 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093792414298, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 164, "file_size": 2598388, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 78459, "largest_seqno": 80477, "table_properties": {"data_size": 2590610, "index_size": 4335, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19959, "raw_average_key_size": 21, "raw_value_size": 2573537, "raw_average_value_size": 2749, "num_data_blocks": 186, "num_entries": 936, "num_filter_entries": 936, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093653, "oldest_key_time": 1769093653, "file_creation_time": 1769093792, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:56:32 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 16902 microseconds, and 7634 cpu microseconds.
Jan 22 09:56:32 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:56:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:56:32.414344) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #164: 2598388 bytes OK
Jan 22 09:56:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:56:32.414365) [db/memtable_list.cc:519] [default] Level-0 commit table #164 started
Jan 22 09:56:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:56:32.416019) [db/memtable_list.cc:722] [default] Level-0 commit table #164: memtable #1 done
Jan 22 09:56:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:56:32.416046) EVENT_LOG_v1 {"time_micros": 1769093792416038, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:56:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:56:32.416071) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:56:32 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 3977642, prev total WAL file size 3977642, number of live WAL files 2.
Jan 22 09:56:32 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000160.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:56:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:56:32.417772) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Jan 22 09:56:32 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:56:32 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [164(2537KB)], [162(10MB)]
Jan 22 09:56:32 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093792417867, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [164], "files_L6": [162], "score": -1, "input_data_size": 13241682, "oldest_snapshot_seqno": -1}
Jan 22 09:56:32 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #165: 12759 keys, 11618584 bytes, temperature: kUnknown
Jan 22 09:56:32 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093792495470, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 165, "file_size": 11618584, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11548482, "index_size": 37093, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31941, "raw_key_size": 350863, "raw_average_key_size": 27, "raw_value_size": 11330620, "raw_average_value_size": 888, "num_data_blocks": 1348, "num_entries": 12759, "num_filter_entries": 12759, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769093792, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 165, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:56:32 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:56:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:56:32.495767) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 11618584 bytes
Jan 22 09:56:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:56:32.496936) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 170.4 rd, 149.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 10.2 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(9.6) write-amplify(4.5) OK, records in: 13276, records dropped: 517 output_compression: NoCompression
Jan 22 09:56:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:56:32.496953) EVENT_LOG_v1 {"time_micros": 1769093792496946, "job": 104, "event": "compaction_finished", "compaction_time_micros": 77703, "compaction_time_cpu_micros": 33165, "output_level": 6, "num_output_files": 1, "total_output_size": 11618584, "num_input_records": 13276, "num_output_records": 12759, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:56:32 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:56:32 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093792497511, "job": 104, "event": "table_file_deletion", "file_number": 164}
Jan 22 09:56:32 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000162.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:56:32 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093792499369, "job": 104, "event": "table_file_deletion", "file_number": 162}
Jan 22 09:56:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:56:32.417711) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:56:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:56:32.499511) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:56:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:56:32.499520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:56:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:56:32.499522) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:56:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:56:32.499524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:56:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:56:32.499526) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:56:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:32.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:56:33 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:34.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:34 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:34 np0005592158 ceph-mon[81715]: Health check update: 83 slow ops, oldest one blocked for 4783 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:56:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:56:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:34.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:56:35 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:56:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:36.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:56:36 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:36.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:56:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:38.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:56:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:56:38 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:38.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:39 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:39 np0005592158 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:39 np0005592158 ceph-mon[81715]: Health check update: 83 slow ops, oldest one blocked for 4788 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:56:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:40.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:40 np0005592158 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:56:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:40.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:56:41 np0005592158 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:42.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:42 np0005592158 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:42.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:56:43 np0005592158 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:44 np0005592158 podman[242672]: 2026-01-22 14:56:44.054745884 +0000 UTC m=+0.051027478 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:56:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:56:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:44.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:56:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:44.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:45 np0005592158 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:45 np0005592158 ceph-mon[81715]: Health check update: 63 slow ops, oldest one blocked for 4793 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:56:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:56:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:46.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:56:46 np0005592158 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:46 np0005592158 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:46.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:56:47.495 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:56:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:56:47.495 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:56:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:56:47.495 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:56:47 np0005592158 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:48.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:56:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:48.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:49 np0005592158 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:50.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:50 np0005592158 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:50 np0005592158 ceph-mon[81715]: Health check update: 63 slow ops, oldest one blocked for 4798 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:56:50 np0005592158 ceph-mon[81715]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:56:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:50.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:51 np0005592158 ceph-mon[81715]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:56:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:52.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:56:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:52.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:56:53 np0005592158 ceph-mon[81715]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:56:53 np0005592158 ceph-mon[81715]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:56:53 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:56:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:54.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:54 np0005592158 ceph-mon[81715]: Health check update: 47 slow ops, oldest one blocked for 4803 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:56:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:54.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:55 np0005592158 ceph-mon[81715]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:56:55 np0005592158 ceph-mon[81715]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:56:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:56.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:56 np0005592158 ceph-mon[81715]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:56:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:56:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:56.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:56:57 np0005592158 ceph-mon[81715]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:56:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:58.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:56:58 np0005592158 ceph-mon[81715]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:56:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:56:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:58.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:00 np0005592158 podman[242693]: 2026-01-22 14:57:00.134945291 +0000 UTC m=+0.131370787 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 09:57:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:00.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:00 np0005592158 ceph-mon[81715]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:57:00 np0005592158 ceph-mon[81715]: Health check update: 47 slow ops, oldest one blocked for 4808 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:57:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:00.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:57:01 np0005592158 ceph-mon[81715]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:57:01 np0005592158 ceph-mon[81715]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:57:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:02.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:02 np0005592158 ceph-mon[81715]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:57:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:02.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:57:03 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:57:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:04.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:57:04 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:04 np0005592158 ceph-mon[81715]: Health check update: 47 slow ops, oldest one blocked for 4813 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:04.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:05 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:05 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:57:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:06.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:57:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:06.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:07 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:08.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:57:08 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:08 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:57:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:08.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:57:09 np0005592158 ceph-mon[81715]: Health check update: 41 slow ops, oldest one blocked for 4818 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:57:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:10.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:57:10 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:10.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:12.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:12 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:12 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:12 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:57:12 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:57:12 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:12.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:13 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:57:13 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:57:13 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:57:13 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:13 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:57:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:14.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:14 np0005592158 ceph-mon[81715]: Health check update: 41 slow ops, oldest one blocked for 4823 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:14.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:15 np0005592158 podman[242851]: 2026-01-22 14:57:15.079470059 +0000 UTC m=+0.063124345 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 09:57:15 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:15 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:57:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:16.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:57:16 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:16.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:17 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:18.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:57:18 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:18.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:19 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:19 np0005592158 ceph-mon[81715]: Health check update: 41 slow ops, oldest one blocked for 4828 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:19 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:57:19 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:57:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:20.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:20 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:57:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:20.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:57:21 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:22.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:22.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:23 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:23 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:57:24 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:24 np0005592158 ceph-mon[81715]: Health check update: 41 slow ops, oldest one blocked for 4833 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:24.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:24.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:25 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:26 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:57:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:26.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:57:26 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:57:26.362 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:57:26 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:57:26.363 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:57:26 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:57:26.364 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:57:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:26.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:27 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:28 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:28.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:28 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:57:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:28.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:29 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:29 np0005592158 ceph-mon[81715]: Health check update: 41 slow ops, oldest one blocked for 4838 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:30.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:30 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:30.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:31 np0005592158 podman[242921]: 2026-01-22 14:57:31.107130906 +0000 UTC m=+0.076968888 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 22 09:57:31 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:32.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:32 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:32.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:57:33 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:57:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:34.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:57:34 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:34 np0005592158 ceph-mon[81715]: Health check update: 41 slow ops, oldest one blocked for 4843 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:57:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:34.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:57:35 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:36.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:36 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:36.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:37 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:38.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:57:38 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:38.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:39 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:39 np0005592158 ceph-mon[81715]: Health check update: 53 slow ops, oldest one blocked for 4848 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:40.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:40 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:57:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:40.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:57:41 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:42.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:42 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:57:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:42.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:57:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:57:43 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:44.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:57:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:45.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:57:45 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:45 np0005592158 ceph-mon[81715]: Health check update: 53 slow ops, oldest one blocked for 4853 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:46 np0005592158 podman[242948]: 2026-01-22 14:57:46.067617855 +0000 UTC m=+0.053917127 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:57:46 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:46 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:57:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:46.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:57:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:47.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:47 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:57:47.495 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:57:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:57:47.495 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:57:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:57:47.496 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:57:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:48.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:48 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:57:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:49.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:49 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:49 np0005592158 ceph-mon[81715]: Health check update: 53 slow ops, oldest one blocked for 4858 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:50.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:50 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:57:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:51.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:57:51 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:57:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:52.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:57:52 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:53.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:53 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:57:53 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:54.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:54 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:54 np0005592158 ceph-mon[81715]: Health check update: 53 slow ops, oldest one blocked for 4863 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:57:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:55.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:57:55 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:56.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:56 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:56 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e163 e163: 3 total, 3 up, 3 in
Jan 22 09:57:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:57.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:57 np0005592158 ceph-mon[81715]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:58.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:57:59 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:57:59 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #166. Immutable memtables: 0.
Jan 22 09:57:59 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:57:59.022553) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:57:59 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 166
Jan 22 09:57:59 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093879022862, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 1371, "num_deletes": 250, "total_data_size": 2525957, "memory_usage": 2569128, "flush_reason": "Manual Compaction"}
Jan 22 09:57:59 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #167: started
Jan 22 09:57:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:57:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:59.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:59 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093879031225, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 167, "file_size": 1060655, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 80482, "largest_seqno": 81848, "table_properties": {"data_size": 1056089, "index_size": 1833, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 14035, "raw_average_key_size": 21, "raw_value_size": 1045330, "raw_average_value_size": 1615, "num_data_blocks": 80, "num_entries": 647, "num_filter_entries": 647, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093793, "oldest_key_time": 1769093793, "file_creation_time": 1769093879, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:57:59 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 8501 microseconds, and 4424 cpu microseconds.
Jan 22 09:57:59 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:57:59 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:57:59.031262) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #167: 1060655 bytes OK
Jan 22 09:57:59 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:57:59.031278) [db/memtable_list.cc:519] [default] Level-0 commit table #167 started
Jan 22 09:57:59 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:57:59.032629) [db/memtable_list.cc:722] [default] Level-0 commit table #167: memtable #1 done
Jan 22 09:57:59 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:57:59.032641) EVENT_LOG_v1 {"time_micros": 1769093879032637, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:57:59 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:57:59.032656) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:57:59 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 2519302, prev total WAL file size 2519302, number of live WAL files 2.
Jan 22 09:57:59 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000163.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:57:59 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:57:59.033837) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323537' seq:72057594037927935, type:22 .. '6D6772737461740032353038' seq:0, type:0; will stop at (end)
Jan 22 09:57:59 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:57:59 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [167(1035KB)], [165(11MB)]
Jan 22 09:57:59 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093879033870, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [167], "files_L6": [165], "score": -1, "input_data_size": 12679239, "oldest_snapshot_seqno": -1}
Jan 22 09:57:59 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #168: 12927 keys, 9372842 bytes, temperature: kUnknown
Jan 22 09:57:59 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093879100585, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 168, "file_size": 9372842, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9305576, "index_size": 33873, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32325, "raw_key_size": 355122, "raw_average_key_size": 27, "raw_value_size": 9088639, "raw_average_value_size": 703, "num_data_blocks": 1214, "num_entries": 12927, "num_filter_entries": 12927, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769093879, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 168, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:57:59 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:57:59 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:57:59.101009) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 9372842 bytes
Jan 22 09:57:59 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:57:59.102837) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 189.6 rd, 140.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 11.1 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(20.8) write-amplify(8.8) OK, records in: 13406, records dropped: 479 output_compression: NoCompression
Jan 22 09:57:59 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:57:59.102868) EVENT_LOG_v1 {"time_micros": 1769093879102853, "job": 106, "event": "compaction_finished", "compaction_time_micros": 66877, "compaction_time_cpu_micros": 27378, "output_level": 6, "num_output_files": 1, "total_output_size": 9372842, "num_input_records": 13406, "num_output_records": 12927, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:57:59 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:57:59 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093879103368, "job": 106, "event": "table_file_deletion", "file_number": 167}
Jan 22 09:57:59 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000165.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:57:59 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093879107646, "job": 106, "event": "table_file_deletion", "file_number": 165}
Jan 22 09:57:59 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:57:59.033770) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:57:59 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:57:59.107810) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:57:59 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:57:59.107816) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:57:59 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:57:59.107818) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:57:59 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:57:59.107819) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:57:59 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:57:59.107820) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:58:00 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:00 np0005592158 ceph-mon[81715]: Health check update: 53 slow ops, oldest one blocked for 4868 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:00 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:58:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:00.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:58:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:01.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:01 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e164 e164: 3 total, 3 up, 3 in
Jan 22 09:58:01 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:02 np0005592158 podman[242967]: 2026-01-22 14:58:02.087494961 +0000 UTC m=+0.079115856 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 22 09:58:02 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:02.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:03.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:03 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:58:04 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:04 np0005592158 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4873 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:04.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:05.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:05 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:06 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:58:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:06.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:58:06 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e165 e165: 3 total, 3 up, 3 in
Jan 22 09:58:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:07.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:07 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:08.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:58:08 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:09.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:09 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:09 np0005592158 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4878 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:10.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:10 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:11.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:11 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:12.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:12 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:13.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:13 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:58:13 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:14.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:14 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:14 np0005592158 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4883 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:15.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:15 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:16.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:16 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:58:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:17.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:58:17 np0005592158 podman[242993]: 2026-01-22 14:58:17.052943745 +0000 UTC m=+0.044028918 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Jan 22 09:58:17 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:18.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:58:18 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:19.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:19 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:19 np0005592158 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4888 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:20.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:20 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:20 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:58:20 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:58:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:21.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:21 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:21 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:58:21 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:58:21 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:58:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:22.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:22 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:58:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:23.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:58:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:58:24 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:24.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:25.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:25 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:25 np0005592158 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4893 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:25 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:26.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:26 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:27.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:27 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:58:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:28.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:58:28 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:58:28 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:29.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:29 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:29 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:58:29 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:58:29 np0005592158 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4898 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:58:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:30.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:58:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:31.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:31 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:31 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:31 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:58:31.487 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:58:31 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:58:31.489 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:58:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:32.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:32 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:33.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:33 np0005592158 podman[243313]: 2026-01-22 14:58:33.143209093 +0000 UTC m=+0.110636187 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 09:58:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:58:33 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:58:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:34.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:58:34 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:34 np0005592158 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4903 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:34 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e166 e166: 3 total, 3 up, 3 in
Jan 22 09:58:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:35.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:35 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:58:35.491 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:58:35 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:36.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:36 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e167 e167: 3 total, 3 up, 3 in
Jan 22 09:58:36 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:58:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:37.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:58:37 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e168 e168: 3 total, 3 up, 3 in
Jan 22 09:58:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:58:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:38.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:38 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:39.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:39 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:39 np0005592158 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4908 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:40.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:40 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:41.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:41 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:42.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:42 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:43.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:58:43 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:44 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 e169: 3 total, 3 up, 3 in
Jan 22 09:58:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:58:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:44.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:58:44 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:44 np0005592158 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4913 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:45.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:46.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:46 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:46 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:47.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:58:47.496 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:58:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:58:47.496 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:58:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:58:47.496 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:58:47 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:48 np0005592158 podman[243340]: 2026-01-22 14:58:48.08077068 +0000 UTC m=+0.070204096 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:58:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:58:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:48.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:48 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:49.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:49 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:49 np0005592158 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4917 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:50.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:50 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:51.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:51 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:58:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:52.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:58:52 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:53.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:53 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:58:53 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:54 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #169. Immutable memtables: 0.
Jan 22 09:58:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:58:54.047963) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:58:54 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 169
Jan 22 09:58:54 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093934047990, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 1057, "num_deletes": 259, "total_data_size": 1746764, "memory_usage": 1778752, "flush_reason": "Manual Compaction"}
Jan 22 09:58:54 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #170: started
Jan 22 09:58:54 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093934056502, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 170, "file_size": 1147709, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 81853, "largest_seqno": 82905, "table_properties": {"data_size": 1143055, "index_size": 2113, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11768, "raw_average_key_size": 20, "raw_value_size": 1132988, "raw_average_value_size": 1970, "num_data_blocks": 90, "num_entries": 575, "num_filter_entries": 575, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093880, "oldest_key_time": 1769093880, "file_creation_time": 1769093934, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:58:54 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 8592 microseconds, and 3922 cpu microseconds.
Jan 22 09:58:54 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:58:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:58:54.056553) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #170: 1147709 bytes OK
Jan 22 09:58:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:58:54.056569) [db/memtable_list.cc:519] [default] Level-0 commit table #170 started
Jan 22 09:58:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:58:54.058566) [db/memtable_list.cc:722] [default] Level-0 commit table #170: memtable #1 done
Jan 22 09:58:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:58:54.058589) EVENT_LOG_v1 {"time_micros": 1769093934058582, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:58:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:58:54.058610) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:58:54 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 1741350, prev total WAL file size 1741350, number of live WAL files 2.
Jan 22 09:58:54 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000166.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:58:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:58:54.059709) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373731' seq:72057594037927935, type:22 .. '6C6F676D0034303233' seq:0, type:0; will stop at (end)
Jan 22 09:58:54 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:58:54 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [170(1120KB)], [168(9153KB)]
Jan 22 09:58:54 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093934059749, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [170], "files_L6": [168], "score": -1, "input_data_size": 10520551, "oldest_snapshot_seqno": -1}
Jan 22 09:58:54 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #171: 12967 keys, 10366624 bytes, temperature: kUnknown
Jan 22 09:58:54 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093934139023, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 171, "file_size": 10366624, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10297790, "index_size": 35313, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32453, "raw_key_size": 357307, "raw_average_key_size": 27, "raw_value_size": 10078884, "raw_average_value_size": 777, "num_data_blocks": 1270, "num_entries": 12967, "num_filter_entries": 12967, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769093934, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 171, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:58:54 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:58:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:58:54.139317) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 10366624 bytes
Jan 22 09:58:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:58:54.142998) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 132.6 rd, 130.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 8.9 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(18.2) write-amplify(9.0) OK, records in: 13502, records dropped: 535 output_compression: NoCompression
Jan 22 09:58:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:58:54.143049) EVENT_LOG_v1 {"time_micros": 1769093934143028, "job": 108, "event": "compaction_finished", "compaction_time_micros": 79344, "compaction_time_cpu_micros": 49816, "output_level": 6, "num_output_files": 1, "total_output_size": 10366624, "num_input_records": 13502, "num_output_records": 12967, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:58:54 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:58:54 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093934143865, "job": 108, "event": "table_file_deletion", "file_number": 170}
Jan 22 09:58:54 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000168.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:58:54 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093934147831, "job": 108, "event": "table_file_deletion", "file_number": 168}
Jan 22 09:58:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:58:54.059586) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:58:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:58:54.147898) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:58:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:58:54.147907) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:58:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:58:54.147912) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:58:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:58:54.147917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:58:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:58:54.147921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:58:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:58:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:54.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:58:54 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:54 np0005592158 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4923 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:55.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:55 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:56.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:56 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:58:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:57.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:58:57 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:58:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:58.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:58 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:58:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:59.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:59 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:59 np0005592158 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4928 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:59:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:00.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:59:00 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:01.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:01 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:02.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:02 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:59:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:03.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:59:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:59:04 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:04 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:04 np0005592158 podman[243359]: 2026-01-22 14:59:04.170648838 +0000 UTC m=+0.153377771 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 09:59:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:04.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:05 np0005592158 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4933 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:05 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:05.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:06 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:59:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:06.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:59:07 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:59:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:07.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:59:08 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:59:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:08.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:09 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:09 np0005592158 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4938 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:09.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:10 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:59:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:10.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:59:11 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:11.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:12 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:59:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:12.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:59:13 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:59:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:13.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:59:13 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:59:14 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:14 np0005592158 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4943 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:14.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:59:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:15.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:59:15 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:16 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:16.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:17.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:17 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:17 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #172. Immutable memtables: 0.
Jan 22 09:59:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:59:17.315033) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:59:17 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 172
Jan 22 09:59:17 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093957315082, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 554, "num_deletes": 251, "total_data_size": 659738, "memory_usage": 671008, "flush_reason": "Manual Compaction"}
Jan 22 09:59:17 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #173: started
Jan 22 09:59:17 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093957321917, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 173, "file_size": 432980, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 82910, "largest_seqno": 83459, "table_properties": {"data_size": 430246, "index_size": 705, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7232, "raw_average_key_size": 19, "raw_value_size": 424544, "raw_average_value_size": 1141, "num_data_blocks": 31, "num_entries": 372, "num_filter_entries": 372, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093934, "oldest_key_time": 1769093934, "file_creation_time": 1769093957, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:59:17 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 6985 microseconds, and 4099 cpu microseconds.
Jan 22 09:59:17 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:59:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:59:17.322008) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #173: 432980 bytes OK
Jan 22 09:59:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:59:17.322050) [db/memtable_list.cc:519] [default] Level-0 commit table #173 started
Jan 22 09:59:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:59:17.323429) [db/memtable_list.cc:722] [default] Level-0 commit table #173: memtable #1 done
Jan 22 09:59:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:59:17.323461) EVENT_LOG_v1 {"time_micros": 1769093957323450, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:59:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:59:17.323497) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:59:17 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 656473, prev total WAL file size 656473, number of live WAL files 2.
Jan 22 09:59:17 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000169.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:59:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:59:17.324376) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Jan 22 09:59:17 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:59:17 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [173(422KB)], [171(10123KB)]
Jan 22 09:59:17 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093957324438, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [173], "files_L6": [171], "score": -1, "input_data_size": 10799604, "oldest_snapshot_seqno": -1}
Jan 22 09:59:17 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #174: 12828 keys, 9181788 bytes, temperature: kUnknown
Jan 22 09:59:17 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093957373928, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 174, "file_size": 9181788, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9114770, "index_size": 33817, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32133, "raw_key_size": 355261, "raw_average_key_size": 27, "raw_value_size": 8898836, "raw_average_value_size": 693, "num_data_blocks": 1203, "num_entries": 12828, "num_filter_entries": 12828, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769093957, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 174, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:59:17 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:59:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:59:17.374318) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 9181788 bytes
Jan 22 09:59:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:59:17.376030) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 217.7 rd, 185.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.9 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(46.1) write-amplify(21.2) OK, records in: 13339, records dropped: 511 output_compression: NoCompression
Jan 22 09:59:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:59:17.376061) EVENT_LOG_v1 {"time_micros": 1769093957376047, "job": 110, "event": "compaction_finished", "compaction_time_micros": 49610, "compaction_time_cpu_micros": 24882, "output_level": 6, "num_output_files": 1, "total_output_size": 9181788, "num_input_records": 13339, "num_output_records": 12828, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:59:17 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:59:17 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093957376363, "job": 110, "event": "table_file_deletion", "file_number": 173}
Jan 22 09:59:17 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000171.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:59:17 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093957379859, "job": 110, "event": "table_file_deletion", "file_number": 171}
Jan 22 09:59:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:59:17.324303) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:59:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:59:17.379963) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:59:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:59:17.379970) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:59:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:59:17.379973) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:59:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:59:17.379976) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:59:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-14:59:17.379979) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:59:18 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:59:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:59:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:18.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:59:19 np0005592158 podman[243387]: 2026-01-22 14:59:19.11315026 +0000 UTC m=+0.096746492 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Jan 22 09:59:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:19.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:19 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:19 np0005592158 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4947 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:20.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:20 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:21.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:21 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:22.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:22 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:23.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:59:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:59:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:24.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:59:24 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:24 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:24 np0005592158 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4953 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:25.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:25 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:26.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:26 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:59:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:27.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:59:27 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:28 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:59:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:59:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:28.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:59:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:59:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:29.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:59:30 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:30.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:31 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:31 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:31 np0005592158 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4958 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:31 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:59:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:59:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 09:59:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 09:59:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:31.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:59:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:32.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:59:32 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:59:32.546 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:59:32 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:59:32.548 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:59:32 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:33.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:59:33 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:59:33.549 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:59:33 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:33 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:59:33 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:59:33 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:59:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:34.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:34 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:34 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:59:34 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:59:35 np0005592158 podman[243539]: 2026-01-22 14:59:35.085094186 +0000 UTC m=+0.076445904 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 09:59:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:35.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:35 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:35 np0005592158 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4963 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:59:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:36.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:59:36 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:37.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:37 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:59:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:38.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:39 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:39.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:40 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:40 np0005592158 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4968 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:40.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:41 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:41 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:59:41 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:59:41 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:59:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:41.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:59:42 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:42.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:43 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:43.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:59:44 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:44.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:45 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:45 np0005592158 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4973 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:59:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:45.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:59:46 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:59:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:46.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:59:47 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:47.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:59:47.497 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:59:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:59:47.497 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:59:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 14:59:47.497 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:59:48 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:59:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:48.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:49.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:49 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:50 np0005592158 podman[243616]: 2026-01-22 14:59:50.113401866 +0000 UTC m=+0.096358792 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 22 09:59:50 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:50 np0005592158 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4978 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:59:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:50.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:59:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:51.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:51 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:52 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:52.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:59:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:53.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:59:53 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:53 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:59:54 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:59:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:54.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:59:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:55.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:55 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:55 np0005592158 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4983 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:56.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:56 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:57.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:57 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:59:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:58.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:58 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 09:59:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:59.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:59 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:00:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:00.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:00:00 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:00 np0005592158 ceph-mon[81715]: Health detail: HEALTH_WARN 91 slow ops, oldest one blocked for 4988 sec, osd.2 has slow ops
Jan 22 10:00:00 np0005592158 ceph-mon[81715]: [WRN] SLOW_OPS: 91 slow ops, oldest one blocked for 4988 sec, osd.2 has slow ops
Jan 22 10:00:00 np0005592158 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4988 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:01.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:01 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:02.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:02 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:03.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:00:03 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:04.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:04 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:05.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:05 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:05 np0005592158 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4993 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:06 np0005592158 podman[243635]: 2026-01-22 15:00:06.101828826 +0000 UTC m=+0.087766409 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 22 10:00:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:00:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:06.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:00:06 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:07.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:07 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:00:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:08.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:08 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:09.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:09 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:00:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:10.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:00:10 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:10 np0005592158 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 4998 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:00:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:11.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:00:11 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:12.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:12 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:13.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:13 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:00:13 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:14.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:14 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:00:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:15.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:00:16 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:16 np0005592158 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 5003 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:16.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:17 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:17 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:00:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:17.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:00:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:00:18 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:18.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:19.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:19 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:20 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:20 np0005592158 ceph-mon[81715]: Health check update: 54 slow ops, oldest one blocked for 5008 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:20.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:21 np0005592158 podman[243662]: 2026-01-22 15:00:21.091626055 +0000 UTC m=+0.070071081 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 10:00:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:21.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:21 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:22 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:22.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:00:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:23.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:00:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:00:23 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:24 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:24.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:25.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:25 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:25 np0005592158 ceph-mon[81715]: Health check update: 54 slow ops, oldest one blocked for 5013 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:26.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:26 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:27.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:27 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:28 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:00:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:28.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:28 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:29.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:29 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:00:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:30.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:00:30 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:30 np0005592158 ceph-mon[81715]: Health check update: 54 slow ops, oldest one blocked for 5018 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:00:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:31.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:00:31 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:32.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:32 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:33.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:00:33 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:33 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:00:33.906 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:00:33 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:00:33.908 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:00:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:34.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:34 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:35.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:35 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:35 np0005592158 ceph-mon[81715]: Health check update: 54 slow ops, oldest one blocked for 5023 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:36 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:00:36 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3575390744' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:00:36 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:00:36 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3575390744' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:00:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:36.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:36 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:37 np0005592158 podman[243681]: 2026-01-22 15:00:37.119786262 +0000 UTC m=+0.106512225 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 10:00:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:37.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:37 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:00:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:38.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:38 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:00:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:39.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:00:39 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:40.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:40 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:00:40.909 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:00:40 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:40 np0005592158 ceph-mon[81715]: Health check update: 54 slow ops, oldest one blocked for 5028 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:00:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:41.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:00:41 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:41 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:00:41 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:00:41 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:00:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:00:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:42.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:00:42 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:43.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:00:43 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:00:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:44.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:00:44 np0005592158 ceph-mon[81715]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:45.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:45 np0005592158 ceph-mon[81715]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:45 np0005592158 ceph-mon[81715]: Health check update: 54 slow ops, oldest one blocked for 5032 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:46.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:47 np0005592158 ceph-mon[81715]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:00:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:47.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:00:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:00:47.498 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 10:00:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:00:47.498 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 10:00:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:00:47.498 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 10:00:48 np0005592158 ceph-mon[81715]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:48 np0005592158 ceph-mon[81715]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:48 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:00:48 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:00:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:00:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:48.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:49 np0005592158 ceph-mon[81715]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:49.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:50 np0005592158 ceph-mon[81715]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:50 np0005592158 ceph-mon[81715]: Health check update: 92 slow ops, oldest one blocked for 5038 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:50.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:51 np0005592158 ceph-mon[81715]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:51.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:52 np0005592158 podman[243887]: 2026-01-22 15:00:52.061854604 +0000 UTC m=+0.053625108 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 10:00:52 np0005592158 ceph-mon[81715]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:52.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:53 np0005592158 ceph-mon[81715]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:53.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:53 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:00:54 np0005592158 ceph-mon[81715]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:54.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:55 np0005592158 ceph-mon[81715]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:55 np0005592158 ceph-mon[81715]: Health check update: 92 slow ops, oldest one blocked for 5043 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:55.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:56 np0005592158 ceph-mon[81715]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:56.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:57.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:57 np0005592158 ceph-mon[81715]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:00:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:58.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:58 np0005592158 ceph-mon[81715]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:00:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:59.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:59 np0005592158 ceph-mon[81715]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:01:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:00.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:00 np0005592158 ceph-mon[81715]: 88 slow requests (by type [ 'delayed' : 88 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:01:00 np0005592158 ceph-mon[81715]: Health check update: 92 slow ops, oldest one blocked for 5048 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:01.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:01 np0005592158 ceph-mon[81715]: 88 slow requests (by type [ 'delayed' : 88 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:01:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:02.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:02 np0005592158 ceph-mon[81715]: 88 slow requests (by type [ 'delayed' : 88 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:01:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:03.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:01:03 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:04.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:04 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:05.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:05 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:05 np0005592158 ceph-mon[81715]: Health check update: 88 slow ops, oldest one blocked for 5053 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:06.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:06 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:07.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:07 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:08 np0005592158 podman[243918]: 2026-01-22 15:01:08.131797062 +0000 UTC m=+0.114217963 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 10:01:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:01:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:08.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:08 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:01:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:09.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:01:09 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:10.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:10 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:10 np0005592158 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 5057 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:11.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:11 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:12.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:13 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:13 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:01:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:13.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:14 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:14.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:15 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:15.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:16 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:16 np0005592158 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 5063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:16 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:16.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:17 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:17.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:18 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:01:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:18.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:01:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3039925830' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:01:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:01:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3039925830' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:01:19 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:19.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:20 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:20 np0005592158 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 5067 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:20.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:21 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:21.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:22 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:22.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:23 np0005592158 podman[243944]: 2026-01-22 15:01:23.069674242 +0000 UTC m=+0.057703849 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Jan 22 10:01:23 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:01:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:23.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:24 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:24.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:25 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:25 np0005592158 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 5072 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:25.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:26 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:01:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:26.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:01:27 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:27.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:28 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:28 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:01:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:28.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:29 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:29.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:30.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:30 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:30 np0005592158 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 5077 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:31.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:31 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:32.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:32 np0005592158 ceph-mon[81715]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:01:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:01:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:33.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:01:33 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:01:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:34.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:01:34 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:01:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:35.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:01:36 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:36 np0005592158 ceph-mon[81715]: Health check update: 9 slow ops, oldest one blocked for 5082 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:01:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:36.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:01:37 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:37.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:38 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:01:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:38.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:39 np0005592158 podman[243965]: 2026-01-22 15:01:39.131390008 +0000 UTC m=+0.128284432 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 10:01:39 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:39 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:39.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:40.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:41 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:41 np0005592158 ceph-mon[81715]: Health check update: 20 slow ops, oldest one blocked for 5088 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:41.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:42 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:42 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:01:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:42.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:01:43 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:01:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:43.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:44 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:44.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:45 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:45 np0005592158 ceph-mon[81715]: Health check update: 20 slow ops, oldest one blocked for 5092 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:45.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:46 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:01:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:46.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:01:47 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:47.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:01:47.498 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:01:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:01:47.499 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:01:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:01:47.499 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:01:48 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:01:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:01:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:48.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:01:49 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:49.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:50 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:01:50 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:50 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:01:50 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:01:50 np0005592158 ceph-mon[81715]: Health check update: 20 slow ops, oldest one blocked for 5098 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:50 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:01:50.437 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:01:50 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:01:50.440 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:01:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:50.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:51 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:51.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:52 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:52.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:53 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:53 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:01:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:53.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:54 np0005592158 podman[244123]: 2026-01-22 15:01:54.065584717 +0000 UTC m=+0.056309640 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 22 10:01:54 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:54.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:55 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:55 np0005592158 ceph-mon[81715]: Health check update: 20 slow ops, oldest one blocked for 5103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:55.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:56 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:01:56 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:01:56 np0005592158 ceph-mon[81715]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:56.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:57.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:57 np0005592158 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:01:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:01:58 np0005592158 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:01:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:58.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:01:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:59.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:59 np0005592158 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:00 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:02:00.442 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:02:00 np0005592158 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:00 np0005592158 ceph-mon[81715]: Health check update: 20 slow ops, oldest one blocked for 5108 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:00.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:01.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:01 np0005592158 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:01 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #175. Immutable memtables: 0.
Jan 22 10:02:01 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:02:01.908393) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:02:01 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 175
Jan 22 10:02:01 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094121908514, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 2446, "num_deletes": 251, "total_data_size": 4781749, "memory_usage": 4875096, "flush_reason": "Manual Compaction"}
Jan 22 10:02:01 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #176: started
Jan 22 10:02:01 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094121925744, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 176, "file_size": 3108278, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 83465, "largest_seqno": 85905, "table_properties": {"data_size": 3099212, "index_size": 5239, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 23322, "raw_average_key_size": 21, "raw_value_size": 3079129, "raw_average_value_size": 2822, "num_data_blocks": 225, "num_entries": 1091, "num_filter_entries": 1091, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093958, "oldest_key_time": 1769093958, "file_creation_time": 1769094121, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:02:01 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 17333 microseconds, and 7797 cpu microseconds.
Jan 22 10:02:01 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:02:01 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:02:01.925780) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #176: 3108278 bytes OK
Jan 22 10:02:01 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:02:01.925797) [db/memtable_list.cc:519] [default] Level-0 commit table #176 started
Jan 22 10:02:01 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:02:01.927553) [db/memtable_list.cc:722] [default] Level-0 commit table #176: memtable #1 done
Jan 22 10:02:01 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:02:01.927567) EVENT_LOG_v1 {"time_micros": 1769094121927563, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:02:01 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:02:01.927583) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:02:01 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 4770604, prev total WAL file size 4770604, number of live WAL files 2.
Jan 22 10:02:01 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000172.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:02:01 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:02:01.928839) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Jan 22 10:02:01 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:02:01 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [176(3035KB)], [174(8966KB)]
Jan 22 10:02:01 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094121928895, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [176], "files_L6": [174], "score": -1, "input_data_size": 12290066, "oldest_snapshot_seqno": -1}
Jan 22 10:02:01 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #177: 13402 keys, 10597459 bytes, temperature: kUnknown
Jan 22 10:02:01 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094121984398, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 177, "file_size": 10597459, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10526019, "index_size": 36831, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33541, "raw_key_size": 369013, "raw_average_key_size": 27, "raw_value_size": 10299313, "raw_average_value_size": 768, "num_data_blocks": 1325, "num_entries": 13402, "num_filter_entries": 13402, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769094121, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 177, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:02:01 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:02:01 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:02:01.984737) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 10597459 bytes
Jan 22 10:02:01 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:02:01.986063) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 221.1 rd, 190.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 8.8 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(7.4) write-amplify(3.4) OK, records in: 13919, records dropped: 517 output_compression: NoCompression
Jan 22 10:02:01 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:02:01.986082) EVENT_LOG_v1 {"time_micros": 1769094121986073, "job": 112, "event": "compaction_finished", "compaction_time_micros": 55588, "compaction_time_cpu_micros": 25663, "output_level": 6, "num_output_files": 1, "total_output_size": 10597459, "num_input_records": 13919, "num_output_records": 13402, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:02:01 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:02:01 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094121986916, "job": 112, "event": "table_file_deletion", "file_number": 176}
Jan 22 10:02:01 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000174.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:02:01 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094121988986, "job": 112, "event": "table_file_deletion", "file_number": 174}
Jan 22 10:02:01 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:02:01.928765) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:02:01 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:02:01.989017) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:02:01 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:02:01.989021) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:02:01 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:02:01.989023) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:02:01 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:02:01.989025) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:02:01 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:02:01.989027) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:02:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:02:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:02.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:02:02 np0005592158 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:02:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:03.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:03 np0005592158 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:04.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:05 np0005592158 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:02:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:05.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:02:06 np0005592158 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:06 np0005592158 ceph-mon[81715]: Health check update: 86 slow ops, oldest one blocked for 5113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:02:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:06.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:02:07 np0005592158 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:07 np0005592158 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:07.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:02:08 np0005592158 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:02:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:08.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:02:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:09.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:09 np0005592158 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:10 np0005592158 podman[244192]: 2026-01-22 15:02:10.136624725 +0000 UTC m=+0.114374677 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 10:02:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:10.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:11 np0005592158 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:11 np0005592158 ceph-mon[81715]: Health check update: 86 slow ops, oldest one blocked for 5118 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:11.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:12 np0005592158 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:12.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:13 np0005592158 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:13 np0005592158 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:13 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:02:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:13.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:14 np0005592158 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:14.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:15.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:15 np0005592158 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:15 np0005592158 ceph-mon[81715]: Health check update: 86 slow ops, oldest one blocked for 5123 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:02:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:16.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:02:16 np0005592158 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:17.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:18 np0005592158 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:02:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:02:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:18.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:02:19 np0005592158 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:19 np0005592158 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:19.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:20 np0005592158 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:20 np0005592158 ceph-mon[81715]: Health check update: 86 slow ops, oldest one blocked for 5128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:02:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:20.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:02:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:21.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:22 np0005592158 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:02:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:22.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:02:23 np0005592158 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:23 np0005592158 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:02:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:23.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:24 np0005592158 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:02:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:24.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:02:25 np0005592158 podman[244220]: 2026-01-22 15:02:25.057576447 +0000 UTC m=+0.053544527 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 10:02:25 np0005592158 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:25 np0005592158 ceph-mon[81715]: Health check update: 86 slow ops, oldest one blocked for 5133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:25.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:26 np0005592158 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:26.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:27.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:28 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:02:28 np0005592158 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:28.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:02:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:29.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:02:29 np0005592158 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:29 np0005592158 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:30.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:30 np0005592158 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:30 np0005592158 ceph-mon[81715]: Health check update: 98 slow ops, oldest one blocked for 5138 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:31.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:31 np0005592158 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:02:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:32.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:02:33 np0005592158 ceph-mon[81715]: 28 slow requests (by type [ 'delayed' : 28 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:02:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:02:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:33.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:34 np0005592158 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:34 np0005592158 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:34.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:35 np0005592158 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:35 np0005592158 ceph-mon[81715]: Health check update: 98 slow ops, oldest one blocked for 5143 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:35.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:36 np0005592158 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:36.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:37 np0005592158 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:37.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:02:38 np0005592158 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:38.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:39 np0005592158 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:39.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:40 np0005592158 ceph-mon[81715]: 87 slow requests (by type [ 'delayed' : 87 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:40 np0005592158 ceph-mon[81715]: Health check update: 98 slow ops, oldest one blocked for 5148 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:02:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:40.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:02:41 np0005592158 podman[244241]: 2026-01-22 15:02:41.072966704 +0000 UTC m=+0.067803931 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 10:02:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:41.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:42 np0005592158 ceph-mon[81715]: 34 slow requests (by type [ 'delayed' : 34 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:02:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:42.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:02:43 np0005592158 ceph-mon[81715]: 89 slow requests (by type [ 'delayed' : 89 ] most affected pool [ 'vms' : 60 ])
Jan 22 10:02:43 np0005592158 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:43.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:44 np0005592158 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:44.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:45.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:45 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 10:02:45 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4231267640' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 10:02:45 np0005592158 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:45 np0005592158 ceph-mon[81715]: Health check update: 87 slow ops, oldest one blocked for 5153 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:46.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:47 np0005592158 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:02:47.499 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:02:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:02:47.500 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:02:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:02:47.500 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:02:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:02:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:47.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:02:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e170 e170: 3 total, 3 up, 3 in
Jan 22 10:02:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:02:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:48.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:49 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e171 e171: 3 total, 3 up, 3 in
Jan 22 10:02:49 np0005592158 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:49 np0005592158 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:49.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:50 np0005592158 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:50 np0005592158 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:50 np0005592158 ceph-mon[81715]: Health check update: 98 slow ops, oldest one blocked for 5158 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:02:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:50.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:02:51 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e172 e172: 3 total, 3 up, 3 in
Jan 22 10:02:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:02:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:51.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:02:52 np0005592158 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:52 np0005592158 ceph-mon[81715]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 41 ])
Jan 22 10:02:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:02:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:52.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:02:53 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:02:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:53.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:53 np0005592158 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:02:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:54.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:54 np0005592158 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:02:55 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:02:55.270 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:02:55 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:02:55.272 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:02:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:55.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:56 np0005592158 podman[244268]: 2026-01-22 15:02:56.071624953 +0000 UTC m=+0.059288412 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:02:56 np0005592158 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:02:56 np0005592158 ceph-mon[81715]: Health check update: 98 slow ops, oldest one blocked for 5163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:56.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:02:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:57.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:02:57 np0005592158 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:02:57 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 10:02:57 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:02:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:02:58 np0005592158 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:02:58 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:02:58 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:02:58 np0005592158 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:02:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:02:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:58.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:02:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:02:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:59.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:59 np0005592158 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:00 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 e173: 3 total, 3 up, 3 in
Jan 22 10:03:00 np0005592158 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:00 np0005592158 ceph-mon[81715]: Health check update: 82 slow ops, oldest one blocked for 5168 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:03:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:00.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:03:01 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:03:01.274 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:03:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:01.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:01 np0005592158 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:02 np0005592158 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:03:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:02.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:03:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:03:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:03.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:03 np0005592158 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:03 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:03:03 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:03:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:04.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:04 np0005592158 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:05.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:06 np0005592158 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:06 np0005592158 ceph-mon[81715]: Health check update: 82 slow ops, oldest one blocked for 5173 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:06.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:07 np0005592158 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:07 np0005592158 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:07.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:08 np0005592158 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:03:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:08.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:09 np0005592158 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:09.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:10 np0005592158 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:10 np0005592158 ceph-mon[81715]: Health check update: 82 slow ops, oldest one blocked for 5178 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:03:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:10.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:03:11 np0005592158 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:11.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:12 np0005592158 podman[244468]: 2026-01-22 15:03:12.102553639 +0000 UTC m=+0.096372542 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 22 10:03:12 np0005592158 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:12.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:13 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:03:13 np0005592158 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:13.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:14 np0005592158 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:14.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:15.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:15 np0005592158 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:15 np0005592158 ceph-mon[81715]: Health check update: 82 slow ops, oldest one blocked for 5183 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:16 np0005592158 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:16.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:17.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:17 np0005592158 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:03:18 np0005592158 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:18.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:19.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:20 np0005592158 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:03:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:20.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:03:21 np0005592158 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:21 np0005592158 ceph-mon[81715]: Health check update: 82 slow ops, oldest one blocked for 5188 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:21 np0005592158 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:03:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:21.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:03:22 np0005592158 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:22.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:03:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:23.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:24 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:24 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:03:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:24.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:03:25 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:25 np0005592158 ceph-mon[81715]: Health check update: 82 slow ops, oldest one blocked for 5193 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:25.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:26 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:03:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:26.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:03:27 np0005592158 podman[244496]: 2026-01-22 15:03:27.057901059 +0000 UTC m=+0.052968130 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 22 10:03:27 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:27.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:28 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:03:28 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:28.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:29 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:03:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:29.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:03:30 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #178. Immutable memtables: 0.
Jan 22 10:03:30 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:03:30.089194) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:03:30 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 113] Flushing memtable with next log file: 178
Jan 22 10:03:30 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094210089249, "job": 113, "event": "flush_started", "num_memtables": 1, "num_entries": 1379, "num_deletes": 257, "total_data_size": 2542825, "memory_usage": 2590016, "flush_reason": "Manual Compaction"}
Jan 22 10:03:30 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 113] Level-0 flush table #179: started
Jan 22 10:03:30 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094210106283, "cf_name": "default", "job": 113, "event": "table_file_creation", "file_number": 179, "file_size": 1671345, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 85910, "largest_seqno": 87284, "table_properties": {"data_size": 1665616, "index_size": 2868, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14497, "raw_average_key_size": 20, "raw_value_size": 1653115, "raw_average_value_size": 2361, "num_data_blocks": 124, "num_entries": 700, "num_filter_entries": 700, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094122, "oldest_key_time": 1769094122, "file_creation_time": 1769094210, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:03:30 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 113] Flush lasted 17144 microseconds, and 8837 cpu microseconds.
Jan 22 10:03:30 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:03:30 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:03:30.106342) [db/flush_job.cc:967] [default] [JOB 113] Level-0 flush table #179: 1671345 bytes OK
Jan 22 10:03:30 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:03:30.106365) [db/memtable_list.cc:519] [default] Level-0 commit table #179 started
Jan 22 10:03:30 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:03:30.108005) [db/memtable_list.cc:722] [default] Level-0 commit table #179: memtable #1 done
Jan 22 10:03:30 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:03:30.108022) EVENT_LOG_v1 {"time_micros": 1769094210108016, "job": 113, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:03:30 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:03:30.108041) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:03:30 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 113] Try to delete WAL files size 2536071, prev total WAL file size 2536071, number of live WAL files 2.
Jan 22 10:03:30 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000175.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:03:30 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:03:30.108922) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034303232' seq:72057594037927935, type:22 .. '6C6F676D0034323735' seq:0, type:0; will stop at (end)
Jan 22 10:03:30 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 114] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:03:30 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 113 Base level 0, inputs: [179(1632KB)], [177(10MB)]
Jan 22 10:03:30 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094210108972, "job": 114, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [179], "files_L6": [177], "score": -1, "input_data_size": 12268804, "oldest_snapshot_seqno": -1}
Jan 22 10:03:30 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 114] Generated table #180: 13571 keys, 12122300 bytes, temperature: kUnknown
Jan 22 10:03:30 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094210172859, "cf_name": "default", "job": 114, "event": "table_file_creation", "file_number": 180, "file_size": 12122300, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12048170, "index_size": 39073, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33989, "raw_key_size": 374189, "raw_average_key_size": 27, "raw_value_size": 11816867, "raw_average_value_size": 870, "num_data_blocks": 1415, "num_entries": 13571, "num_filter_entries": 13571, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769094210, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 180, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:03:30 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:03:30 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:03:30.173329) [db/compaction/compaction_job.cc:1663] [default] [JOB 114] Compacted 1@0 + 1@6 files to L6 => 12122300 bytes
Jan 22 10:03:30 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:03:30.175148) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 191.6 rd, 189.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 10.1 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(14.6) write-amplify(7.3) OK, records in: 14102, records dropped: 531 output_compression: NoCompression
Jan 22 10:03:30 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:03:30.175191) EVENT_LOG_v1 {"time_micros": 1769094210175175, "job": 114, "event": "compaction_finished", "compaction_time_micros": 64025, "compaction_time_cpu_micros": 30047, "output_level": 6, "num_output_files": 1, "total_output_size": 12122300, "num_input_records": 14102, "num_output_records": 13571, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:03:30 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000179.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:03:30 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094210176214, "job": 114, "event": "table_file_deletion", "file_number": 179}
Jan 22 10:03:30 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000177.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:03:30 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094210177867, "job": 114, "event": "table_file_deletion", "file_number": 177}
Jan 22 10:03:30 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:03:30.108873) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:03:30 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:03:30.177949) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:03:30 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:03:30.177954) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:03:30 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:03:30.177955) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:03:30 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:03:30.177957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:03:30 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:03:30.177958) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:03:30 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:30 np0005592158 ceph-mon[81715]: Health check update: 99 slow ops, oldest one blocked for 5198 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:30.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:31.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:31 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:32 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:32.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:33 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:03:33.238 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:03:33 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:03:33.239 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:03:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:03:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:33.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:33 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:34 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:34.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:35.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:35 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:35 np0005592158 ceph-mon[81715]: Health check update: 99 slow ops, oldest one blocked for 5203 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:36 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:36.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:03:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:37.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:03:37 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:03:38 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:38.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:03:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:39.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:03:39 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:40 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:03:40.241 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:03:40 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:40 np0005592158 ceph-mon[81715]: Health check update: 99 slow ops, oldest one blocked for 5208 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:40.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:41.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:41 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:42 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:42.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:43 np0005592158 podman[244516]: 2026-01-22 15:03:43.130521115 +0000 UTC m=+0.102563947 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:03:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:03:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:43.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:43 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:44 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:44.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:45.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:45 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:45 np0005592158 ceph-mon[81715]: Health check update: 99 slow ops, oldest one blocked for 5213 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:46 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:46.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:03:47.501 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:03:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:03:47.501 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:03:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:03:47.502 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:03:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:47.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:47 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:03:48 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:49.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:03:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:49.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:03:50 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:03:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:51.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:03:51 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:51 np0005592158 ceph-mon[81715]: Health check update: 99 slow ops, oldest one blocked for 5218 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:51 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:51.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:52 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:53.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:53 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:53 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:03:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:53.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:54 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:55.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:55 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:55 np0005592158 ceph-mon[81715]: Health check update: 99 slow ops, oldest one blocked for 5223 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:03:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:55.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:03:56 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:57.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:57 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:03:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:57.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:03:58 np0005592158 podman[244543]: 2026-01-22 15:03:58.071907887 +0000 UTC m=+0.064770091 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 22 10:03:58 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:03:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:59.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:59 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:03:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:03:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:59.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:04:00 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:00 np0005592158 ceph-mon[81715]: Health check update: 99 slow ops, oldest one blocked for 5228 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:01.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:01 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:01.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:02 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:03.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:03 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:04:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:03.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:04 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:05.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:05 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:05 np0005592158 ceph-mon[81715]: Health check update: 99 slow ops, oldest one blocked for 5233 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:05.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:06 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:04:06 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:04:06 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:04:06 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:04:06 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:04:06 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:04:06 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:04:06 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:07.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:07 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:07.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:04:08 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:04:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:09.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:04:09 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:09.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:10 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:10 np0005592158 ceph-mon[81715]: Health check update: 99 slow ops, oldest one blocked for 5238 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:11.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:11 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:11.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:12 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:12 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:04:12 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:04:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:04:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:13.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:04:13 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:04:13 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:04:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:13.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:04:14 np0005592158 podman[244861]: 2026-01-22 15:04:14.121896918 +0000 UTC m=+0.117524562 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 22 10:04:14 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:04:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:15.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:04:15 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:15 np0005592158 ceph-mon[81715]: Health check update: 99 slow ops, oldest one blocked for 5243 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:15.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:16 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:04:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:17.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:04:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:17.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:17 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:04:18 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:19.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:19.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:19 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:20 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:20 np0005592158 ceph-mon[81715]: Health check update: 99 slow ops, oldest one blocked for 5248 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:21.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:21.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:21 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:22 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:23.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:04:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:23.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:23 np0005592158 ceph-mon[81715]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:24 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:25.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:25.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:25 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:25 np0005592158 ceph-mon[81715]: Health check update: 99 slow ops, oldest one blocked for 5253 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:26 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:04:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:27.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:04:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:04:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:27.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:04:27 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:28 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:04:28 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:29.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:29 np0005592158 podman[244887]: 2026-01-22 15:04:29.117846741 +0000 UTC m=+0.101256840 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 10:04:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:29.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:29 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:04:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:31.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:04:31 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:31 np0005592158 ceph-mon[81715]: Health check update: 17 slow ops, oldest one blocked for 5258 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:31.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:32 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:32 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:04:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:33.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:04:33 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:04:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:04:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:33.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:04:34 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:34 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 10:04:34 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.5 total, 600.0 interval#012Cumulative writes: 14K writes, 45K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 14K writes, 4803 syncs, 3.00 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1281 writes, 2798 keys, 1281 commit groups, 1.0 writes per commit group, ingest: 1.61 MB, 0.00 MB/s#012Interval WAL: 1281 writes, 604 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 10:04:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:35.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:35 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:35 np0005592158 ceph-mon[81715]: Health check update: 17 slow ops, oldest one blocked for 5263 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:35.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:36 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:37.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:37 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:37.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:38 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:04:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:39.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:39 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:39.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:40 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:40 np0005592158 ceph-mon[81715]: Health check update: 17 slow ops, oldest one blocked for 5268 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:04:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:41.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:04:41 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:41.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:42 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:04:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:43.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:04:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:04:43 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:43.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:44 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:04:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:45.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:04:45 np0005592158 podman[244906]: 2026-01-22 15:04:45.111295907 +0000 UTC m=+0.100052347 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 22 10:04:45 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:45 np0005592158 ceph-mon[81715]: Health check update: 17 slow ops, oldest one blocked for 5273 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:45.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:46 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:04:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:47.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:04:47 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:04:47.502 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:04:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:04:47.502 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:04:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:04:47.502 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:04:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:47.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:04:48 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:49.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:49 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:49 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #181. Immutable memtables: 0.
Jan 22 10:04:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:04:49.512956) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:04:49 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 115] Flushing memtable with next log file: 181
Jan 22 10:04:49 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094289512987, "job": 115, "event": "flush_started", "num_memtables": 1, "num_entries": 1341, "num_deletes": 251, "total_data_size": 2359698, "memory_usage": 2388904, "flush_reason": "Manual Compaction"}
Jan 22 10:04:49 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 115] Level-0 flush table #182: started
Jan 22 10:04:49 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094289525794, "cf_name": "default", "job": 115, "event": "table_file_creation", "file_number": 182, "file_size": 1539104, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 87289, "largest_seqno": 88625, "table_properties": {"data_size": 1533722, "index_size": 2585, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13910, "raw_average_key_size": 20, "raw_value_size": 1522007, "raw_average_value_size": 2275, "num_data_blocks": 111, "num_entries": 669, "num_filter_entries": 669, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094210, "oldest_key_time": 1769094210, "file_creation_time": 1769094289, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 182, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:04:49 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 115] Flush lasted 13039 microseconds, and 5013 cpu microseconds.
Jan 22 10:04:49 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:04:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:04:49.525990) [db/flush_job.cc:967] [default] [JOB 115] Level-0 flush table #182: 1539104 bytes OK
Jan 22 10:04:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:04:49.526087) [db/memtable_list.cc:519] [default] Level-0 commit table #182 started
Jan 22 10:04:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:04:49.527343) [db/memtable_list.cc:722] [default] Level-0 commit table #182: memtable #1 done
Jan 22 10:04:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:04:49.527365) EVENT_LOG_v1 {"time_micros": 1769094289527357, "job": 115, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:04:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:04:49.527387) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:04:49 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 115] Try to delete WAL files size 2353180, prev total WAL file size 2353180, number of live WAL files 2.
Jan 22 10:04:49 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000178.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:04:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:04:49.529217) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037353330' seq:72057594037927935, type:22 .. '7061786F730037373832' seq:0, type:0; will stop at (end)
Jan 22 10:04:49 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 116] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:04:49 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 115 Base level 0, inputs: [182(1503KB)], [180(11MB)]
Jan 22 10:04:49 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094289529257, "job": 116, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [182], "files_L6": [180], "score": -1, "input_data_size": 13661404, "oldest_snapshot_seqno": -1}
Jan 22 10:04:49 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 116] Generated table #183: 13723 keys, 11987791 bytes, temperature: kUnknown
Jan 22 10:04:49 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094289637361, "cf_name": "default", "job": 116, "event": "table_file_creation", "file_number": 183, "file_size": 11987791, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11912960, "index_size": 39390, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 34373, "raw_key_size": 378529, "raw_average_key_size": 27, "raw_value_size": 11679362, "raw_average_value_size": 851, "num_data_blocks": 1424, "num_entries": 13723, "num_filter_entries": 13723, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769094289, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 183, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:04:49 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:04:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:04:49.637704) [db/compaction/compaction_job.cc:1663] [default] [JOB 116] Compacted 1@0 + 1@6 files to L6 => 11987791 bytes
Jan 22 10:04:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:04:49.639218) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 126.3 rd, 110.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 11.6 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(16.7) write-amplify(7.8) OK, records in: 14240, records dropped: 517 output_compression: NoCompression
Jan 22 10:04:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:04:49.639240) EVENT_LOG_v1 {"time_micros": 1769094289639230, "job": 116, "event": "compaction_finished", "compaction_time_micros": 108185, "compaction_time_cpu_micros": 54446, "output_level": 6, "num_output_files": 1, "total_output_size": 11987791, "num_input_records": 14240, "num_output_records": 13723, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:04:49 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000182.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:04:49 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094289639726, "job": 116, "event": "table_file_deletion", "file_number": 182}
Jan 22 10:04:49 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000180.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:04:49 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094289642119, "job": 116, "event": "table_file_deletion", "file_number": 180}
Jan 22 10:04:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:04:49.529145) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:04:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:04:49.642279) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:04:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:04:49.642285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:04:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:04:49.642288) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:04:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:04:49.642289) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:04:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:04:49.642291) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:04:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:49.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:50 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:50 np0005592158 ceph-mon[81715]: Health check update: 17 slow ops, oldest one blocked for 5278 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:04:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:51.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:04:51 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:04:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:51.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:04:52 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:53.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:53 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:04:53 np0005592158 ceph-mon[81715]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:53.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:54 np0005592158 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 54 ])
Jan 22 10:04:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:04:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:55.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:04:55 np0005592158 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 54 ])
Jan 22 10:04:55 np0005592158 ceph-mon[81715]: Health check update: 17 slow ops, oldest one blocked for 5283 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:55.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:56 np0005592158 ceph-mon[81715]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 54 ])
Jan 22 10:04:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:57.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:57.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:57 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:04:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:04:58 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:04:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:59.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:04:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:59.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:59 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:00 np0005592158 podman[244932]: 2026-01-22 15:05:00.095423224 +0000 UTC m=+0.081230217 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 22 10:05:00 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:00 np0005592158 ceph-mon[81715]: Health check update: 82 slow ops, oldest one blocked for 5288 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:01.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:01.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:01 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:02 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:03.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:05:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:03.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:03 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:04 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:05.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:05:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:05.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:05:05 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:05 np0005592158 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 5293 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:06 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:07.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:07.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:07 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:05:08 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:09.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:09.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:09 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:10 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:10 np0005592158 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 5298 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:05:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:11.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:05:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:11.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:12 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:13 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:13.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:13 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:05:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:13.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:13 np0005592158 podman[245222]: 2026-01-22 15:05:13.975091617 +0000 UTC m=+0.041978621 container create e4a3064b3c5c4fd1736382fdeaa45a5af750391d56cdd0d2c8fc786a0d2f1854 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 22 10:05:14 np0005592158 systemd[1]: Started libpod-conmon-e4a3064b3c5c4fd1736382fdeaa45a5af750391d56cdd0d2c8fc786a0d2f1854.scope.
Jan 22 10:05:14 np0005592158 systemd[1]: Started libcrun container.
Jan 22 10:05:14 np0005592158 podman[245222]: 2026-01-22 15:05:13.953094721 +0000 UTC m=+0.019981735 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:05:14 np0005592158 podman[245222]: 2026-01-22 15:05:14.060746393 +0000 UTC m=+0.127633417 container init e4a3064b3c5c4fd1736382fdeaa45a5af750391d56cdd0d2c8fc786a0d2f1854 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:05:14 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:14 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:14 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:14 np0005592158 podman[245222]: 2026-01-22 15:05:14.06765094 +0000 UTC m=+0.134537934 container start e4a3064b3c5c4fd1736382fdeaa45a5af750391d56cdd0d2c8fc786a0d2f1854 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_thompson, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:05:14 np0005592158 podman[245222]: 2026-01-22 15:05:14.071416133 +0000 UTC m=+0.138303127 container attach e4a3064b3c5c4fd1736382fdeaa45a5af750391d56cdd0d2c8fc786a0d2f1854 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_thompson, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 22 10:05:14 np0005592158 focused_thompson[245238]: 167 167
Jan 22 10:05:14 np0005592158 systemd[1]: libpod-e4a3064b3c5c4fd1736382fdeaa45a5af750391d56cdd0d2c8fc786a0d2f1854.scope: Deactivated successfully.
Jan 22 10:05:14 np0005592158 podman[245222]: 2026-01-22 15:05:14.073626853 +0000 UTC m=+0.140513847 container died e4a3064b3c5c4fd1736382fdeaa45a5af750391d56cdd0d2c8fc786a0d2f1854 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_thompson, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 22 10:05:14 np0005592158 systemd[1]: var-lib-containers-storage-overlay-5c97f77ababb134115cbeb6a3f514d75364e9ba9fbcd9ee372bd97ef75dc4cb1-merged.mount: Deactivated successfully.
Jan 22 10:05:14 np0005592158 podman[245222]: 2026-01-22 15:05:14.109746603 +0000 UTC m=+0.176633597 container remove e4a3064b3c5c4fd1736382fdeaa45a5af750391d56cdd0d2c8fc786a0d2f1854 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_thompson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 10:05:14 np0005592158 systemd[1]: libpod-conmon-e4a3064b3c5c4fd1736382fdeaa45a5af750391d56cdd0d2c8fc786a0d2f1854.scope: Deactivated successfully.
Jan 22 10:05:14 np0005592158 podman[245263]: 2026-01-22 15:05:14.259993453 +0000 UTC m=+0.040083409 container create 872e7c83e24da6c87b315c076805d1e21a72427b87e8f5729635b7956dbdb5f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sinoussi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 22 10:05:14 np0005592158 systemd[1]: Started libpod-conmon-872e7c83e24da6c87b315c076805d1e21a72427b87e8f5729635b7956dbdb5f3.scope.
Jan 22 10:05:14 np0005592158 systemd[1]: Started libcrun container.
Jan 22 10:05:14 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca6434bf6df70098082a5675cce29c6f10e4ef20cc80a36989e40fe3dd0d0753/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:05:14 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca6434bf6df70098082a5675cce29c6f10e4ef20cc80a36989e40fe3dd0d0753/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:05:14 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca6434bf6df70098082a5675cce29c6f10e4ef20cc80a36989e40fe3dd0d0753/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:05:14 np0005592158 podman[245263]: 2026-01-22 15:05:14.242331474 +0000 UTC m=+0.022421490 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:05:14 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca6434bf6df70098082a5675cce29c6f10e4ef20cc80a36989e40fe3dd0d0753/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:05:14 np0005592158 podman[245263]: 2026-01-22 15:05:14.354480419 +0000 UTC m=+0.134570415 container init 872e7c83e24da6c87b315c076805d1e21a72427b87e8f5729635b7956dbdb5f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sinoussi, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:05:14 np0005592158 podman[245263]: 2026-01-22 15:05:14.360511302 +0000 UTC m=+0.140601268 container start 872e7c83e24da6c87b315c076805d1e21a72427b87e8f5729635b7956dbdb5f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sinoussi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 22 10:05:14 np0005592158 podman[245263]: 2026-01-22 15:05:14.364064398 +0000 UTC m=+0.144154374 container attach 872e7c83e24da6c87b315c076805d1e21a72427b87e8f5729635b7956dbdb5f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sinoussi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:05:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:15.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:15 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:15 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:15 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:15.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:16 np0005592158 podman[245567]: 2026-01-22 15:05:16.100999649 +0000 UTC m=+0.084719451 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 10:05:16 np0005592158 thirsty_sinoussi[245280]: [
Jan 22 10:05:16 np0005592158 thirsty_sinoussi[245280]:    {
Jan 22 10:05:16 np0005592158 thirsty_sinoussi[245280]:        "available": false,
Jan 22 10:05:16 np0005592158 thirsty_sinoussi[245280]:        "ceph_device": false,
Jan 22 10:05:16 np0005592158 thirsty_sinoussi[245280]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 22 10:05:16 np0005592158 thirsty_sinoussi[245280]:        "lsm_data": {},
Jan 22 10:05:16 np0005592158 thirsty_sinoussi[245280]:        "lvs": [],
Jan 22 10:05:16 np0005592158 thirsty_sinoussi[245280]:        "path": "/dev/sr0",
Jan 22 10:05:16 np0005592158 thirsty_sinoussi[245280]:        "rejected_reasons": [
Jan 22 10:05:16 np0005592158 thirsty_sinoussi[245280]:            "Has a FileSystem",
Jan 22 10:05:16 np0005592158 thirsty_sinoussi[245280]:            "Insufficient space (<5GB)"
Jan 22 10:05:16 np0005592158 thirsty_sinoussi[245280]:        ],
Jan 22 10:05:16 np0005592158 thirsty_sinoussi[245280]:        "sys_api": {
Jan 22 10:05:16 np0005592158 thirsty_sinoussi[245280]:            "actuators": null,
Jan 22 10:05:16 np0005592158 thirsty_sinoussi[245280]:            "device_nodes": "sr0",
Jan 22 10:05:16 np0005592158 thirsty_sinoussi[245280]:            "devname": "sr0",
Jan 22 10:05:16 np0005592158 thirsty_sinoussi[245280]:            "human_readable_size": "482.00 KB",
Jan 22 10:05:16 np0005592158 thirsty_sinoussi[245280]:            "id_bus": "ata",
Jan 22 10:05:16 np0005592158 thirsty_sinoussi[245280]:            "model": "QEMU DVD-ROM",
Jan 22 10:05:16 np0005592158 thirsty_sinoussi[245280]:            "nr_requests": "2",
Jan 22 10:05:16 np0005592158 thirsty_sinoussi[245280]:            "parent": "/dev/sr0",
Jan 22 10:05:16 np0005592158 thirsty_sinoussi[245280]:            "partitions": {},
Jan 22 10:05:16 np0005592158 thirsty_sinoussi[245280]:            "path": "/dev/sr0",
Jan 22 10:05:16 np0005592158 thirsty_sinoussi[245280]:            "removable": "1",
Jan 22 10:05:16 np0005592158 thirsty_sinoussi[245280]:            "rev": "2.5+",
Jan 22 10:05:16 np0005592158 thirsty_sinoussi[245280]:            "ro": "0",
Jan 22 10:05:16 np0005592158 thirsty_sinoussi[245280]:            "rotational": "1",
Jan 22 10:05:16 np0005592158 thirsty_sinoussi[245280]:            "sas_address": "",
Jan 22 10:05:16 np0005592158 thirsty_sinoussi[245280]:            "sas_device_handle": "",
Jan 22 10:05:16 np0005592158 thirsty_sinoussi[245280]:            "scheduler_mode": "mq-deadline",
Jan 22 10:05:16 np0005592158 thirsty_sinoussi[245280]:            "sectors": 0,
Jan 22 10:05:16 np0005592158 thirsty_sinoussi[245280]:            "sectorsize": "2048",
Jan 22 10:05:16 np0005592158 thirsty_sinoussi[245280]:            "size": 493568.0,
Jan 22 10:05:16 np0005592158 thirsty_sinoussi[245280]:            "support_discard": "2048",
Jan 22 10:05:16 np0005592158 thirsty_sinoussi[245280]:            "type": "disk",
Jan 22 10:05:16 np0005592158 thirsty_sinoussi[245280]:            "vendor": "QEMU"
Jan 22 10:05:16 np0005592158 thirsty_sinoussi[245280]:        }
Jan 22 10:05:16 np0005592158 thirsty_sinoussi[245280]:    }
Jan 22 10:05:16 np0005592158 thirsty_sinoussi[245280]: ]
Jan 22 10:05:16 np0005592158 systemd[1]: libpod-872e7c83e24da6c87b315c076805d1e21a72427b87e8f5729635b7956dbdb5f3.scope: Deactivated successfully.
Jan 22 10:05:16 np0005592158 systemd[1]: libpod-872e7c83e24da6c87b315c076805d1e21a72427b87e8f5729635b7956dbdb5f3.scope: Consumed 1.118s CPU time.
Jan 22 10:05:16 np0005592158 podman[245263]: 2026-01-22 15:05:16.175947964 +0000 UTC m=+1.956037920 container died 872e7c83e24da6c87b315c076805d1e21a72427b87e8f5729635b7956dbdb5f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:05:16 np0005592158 systemd[1]: var-lib-containers-storage-overlay-ca6434bf6df70098082a5675cce29c6f10e4ef20cc80a36989e40fe3dd0d0753-merged.mount: Deactivated successfully.
Jan 22 10:05:16 np0005592158 podman[245263]: 2026-01-22 15:05:16.225711016 +0000 UTC m=+2.005800972 container remove 872e7c83e24da6c87b315c076805d1e21a72427b87e8f5729635b7956dbdb5f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sinoussi, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:05:16 np0005592158 systemd[1]: libpod-conmon-872e7c83e24da6c87b315c076805d1e21a72427b87e8f5729635b7956dbdb5f3.scope: Deactivated successfully.
Jan 22 10:05:16 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:16 np0005592158 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 5303 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:16 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:16 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:16 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:17.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:17 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:17 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:17 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:17 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:05:17 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:17 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:05:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:17.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:05:18 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:19.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:19.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:20 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:05:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:21.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:05:21 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:21 np0005592158 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 5308 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:21 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:05:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:21.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:05:22 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:23.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:05:23 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:05:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:23.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:05:24 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:24 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:24 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:25.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:25 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:25 np0005592158 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 5313 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:25.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:26 np0005592158 ceph-mon[81715]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:05:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:27.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:05:27 np0005592158 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:27.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:28 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:05:28 np0005592158 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:29.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:29.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:29 np0005592158 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:30 np0005592158 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:30 np0005592158 ceph-mon[81715]: Health check update: 19 slow ops, oldest one blocked for 5318 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:31 np0005592158 podman[246509]: 2026-01-22 15:05:31.109998106 +0000 UTC m=+0.085981316 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Jan 22 10:05:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:31.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:31.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:31 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 10:05:31 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.0 total, 600.0 interval#012Cumulative writes: 16K writes, 89K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s#012Cumulative WAL: 16K writes, 16K syncs, 1.00 writes per sync, written: 0.15 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1887 writes, 9766 keys, 1887 commit groups, 1.0 writes per commit group, ingest: 16.87 MB, 0.03 MB/s#012Interval WAL: 1887 writes, 1887 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     71.2      1.36              0.32        58    0.023       0      0       0.0       0.0#012  L6      1/0   11.43 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.6    143.2    123.7      4.38              1.59        57    0.077    549K    30K       0.0       0.0#012 Sum      1/0   11.43 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.6    109.3    111.3      5.74              1.91       115    0.050    549K    30K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.5    138.2    140.4      0.59              0.29        14    0.042     95K   3607       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0    143.2    123.7      4.38              1.59        57    0.077    549K    30K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     71.3      1.36              0.32        57    0.024       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 5400.0 total, 600.0 interval#012Flush(GB): cumulative 0.094, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.62 GB write, 0.12 MB/s write, 0.61 GB read, 0.12 MB/s read, 5.7 seconds#012Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f7686a91f0#2 capacity: 304.00 MB usage: 67.60 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000458 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3559,64.19 MB,21.1146%) FilterBlock(115,1.48 MB,0.487152%) IndexBlock(115,1.93 MB,0.633526%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 22 10:05:31 np0005592158 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:32 np0005592158 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:33.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:05:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:05:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:33.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:05:34 np0005592158 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:35.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:35 np0005592158 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:35 np0005592158 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:35 np0005592158 ceph-mon[81715]: Health check update: 102 slow ops, oldest one blocked for 5323 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:35.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:37 np0005592158 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:37.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:37.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:38 np0005592158 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:05:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:05:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:39.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:05:39 np0005592158 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:39 np0005592158 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:39.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:40 np0005592158 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:40 np0005592158 ceph-mon[81715]: Health check update: 102 slow ops, oldest one blocked for 5328 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:41.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:41.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:42 np0005592158 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:43.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:05:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:43.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:44 np0005592158 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:44 np0005592158 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:44 np0005592158 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:45.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:45 np0005592158 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:45 np0005592158 ceph-mon[81715]: Health check update: 102 slow ops, oldest one blocked for 5333 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:05:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:45.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:05:46 np0005592158 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:47 np0005592158 podman[246528]: 2026-01-22 15:05:47.104474322 +0000 UTC m=+0.095705590 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 10:05:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:47.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:05:47.504 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:05:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:05:47.504 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:05:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:05:47.504 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:05:47 np0005592158 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:47.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:05:48 np0005592158 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:05:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:49.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:05:49 np0005592158 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:05:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:49.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:05:50 np0005592158 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:50 np0005592158 ceph-mon[81715]: Health check update: 102 slow ops, oldest one blocked for 5338 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:05:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:51.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:05:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:05:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:51.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:05:51 np0005592158 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:53 np0005592158 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:53.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:53 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:05:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:53.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:54 np0005592158 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:55 np0005592158 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:55.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:55.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:56 np0005592158 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:56 np0005592158 ceph-mon[81715]: Health check update: 102 slow ops, oldest one blocked for 5343 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:57 np0005592158 ceph-mon[81715]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:57.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:05:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:57.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:05:58 np0005592158 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:05:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:05:59 np0005592158 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:05:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:05:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:59.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:05:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:05:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:05:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:59.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:06:00 np0005592158 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:00 np0005592158 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:00 np0005592158 ceph-mon[81715]: Health check update: 102 slow ops, oldest one blocked for 5348 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:01.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:01 np0005592158 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:06:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:01.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:06:02 np0005592158 podman[246556]: 2026-01-22 15:06:02.054310242 +0000 UTC m=+0.048723863 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:06:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:06:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:03.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:06:03 np0005592158 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:06:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:06:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:03.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:06:04 np0005592158 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:06:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:05.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:06:05 np0005592158 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:05 np0005592158 ceph-mon[81715]: Health check update: 84 slow ops, oldest one blocked for 5353 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:05.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:06 np0005592158 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:07.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:07 np0005592158 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:07.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:06:08 np0005592158 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:06:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:09.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:06:09 np0005592158 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:09.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:10 np0005592158 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:10 np0005592158 ceph-mon[81715]: Health check update: 84 slow ops, oldest one blocked for 5358 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:11.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:11 np0005592158 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:11 np0005592158 ceph-mgr[82073]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 10:06:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:06:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:11.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:06:12 np0005592158 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:06:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:13.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:06:13 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:06:13 np0005592158 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:13.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:14 np0005592158 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:15.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:06:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:15.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:06:15 np0005592158 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:15 np0005592158 ceph-mon[81715]: Health check update: 84 slow ops, oldest one blocked for 5363 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:16 np0005592158 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:06:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:17.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:06:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:17.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:17 np0005592158 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:18 np0005592158 podman[246575]: 2026-01-22 15:06:18.095218938 +0000 UTC m=+0.085530833 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:06:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:06:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:06:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2975360653' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:06:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:06:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2975360653' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:06:18 np0005592158 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:19.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:19.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:20 np0005592158 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:20 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #184. Immutable memtables: 0.
Jan 22 10:06:20 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:06:20.187630) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:06:20 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 117] Flushing memtable with next log file: 184
Jan 22 10:06:20 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094380187707, "job": 117, "event": "flush_started", "num_memtables": 1, "num_entries": 1473, "num_deletes": 250, "total_data_size": 2768771, "memory_usage": 2825176, "flush_reason": "Manual Compaction"}
Jan 22 10:06:20 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 117] Level-0 flush table #185: started
Jan 22 10:06:20 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094380197139, "cf_name": "default", "job": 117, "event": "table_file_creation", "file_number": 185, "file_size": 1191330, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 88630, "largest_seqno": 90098, "table_properties": {"data_size": 1186443, "index_size": 2090, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14800, "raw_average_key_size": 21, "raw_value_size": 1174975, "raw_average_value_size": 1727, "num_data_blocks": 89, "num_entries": 680, "num_filter_entries": 680, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094290, "oldest_key_time": 1769094290, "file_creation_time": 1769094380, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 185, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:06:20 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 117] Flush lasted 9529 microseconds, and 3767 cpu microseconds.
Jan 22 10:06:20 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:06:20 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:06:20.197169) [db/flush_job.cc:967] [default] [JOB 117] Level-0 flush table #185: 1191330 bytes OK
Jan 22 10:06:20 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:06:20.197185) [db/memtable_list.cc:519] [default] Level-0 commit table #185 started
Jan 22 10:06:20 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:06:20.198455) [db/memtable_list.cc:722] [default] Level-0 commit table #185: memtable #1 done
Jan 22 10:06:20 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:06:20.198472) EVENT_LOG_v1 {"time_micros": 1769094380198467, "job": 117, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:06:20 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:06:20.198488) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:06:20 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 117] Try to delete WAL files size 2761702, prev total WAL file size 2761702, number of live WAL files 2.
Jan 22 10:06:20 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000181.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:06:20 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:06:20.199422) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353037' seq:72057594037927935, type:22 .. '6D6772737461740032373538' seq:0, type:0; will stop at (end)
Jan 22 10:06:20 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 118] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:06:20 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 117 Base level 0, inputs: [185(1163KB)], [183(11MB)]
Jan 22 10:06:20 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094380199563, "job": 118, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [185], "files_L6": [183], "score": -1, "input_data_size": 13179121, "oldest_snapshot_seqno": -1}
Jan 22 10:06:20 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 118] Generated table #186: 13925 keys, 9876851 bytes, temperature: kUnknown
Jan 22 10:06:20 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094380298985, "cf_name": "default", "job": 118, "event": "table_file_creation", "file_number": 186, "file_size": 9876851, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9804646, "index_size": 36316, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 34821, "raw_key_size": 383465, "raw_average_key_size": 27, "raw_value_size": 9571254, "raw_average_value_size": 687, "num_data_blocks": 1296, "num_entries": 13925, "num_filter_entries": 13925, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769094380, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 186, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:06:20 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:06:20 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:06:20.299452) [db/compaction/compaction_job.cc:1663] [default] [JOB 118] Compacted 1@0 + 1@6 files to L6 => 9876851 bytes
Jan 22 10:06:20 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:06:20.301014) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 132.3 rd, 99.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 11.4 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(19.4) write-amplify(8.3) OK, records in: 14403, records dropped: 478 output_compression: NoCompression
Jan 22 10:06:20 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:06:20.301047) EVENT_LOG_v1 {"time_micros": 1769094380301032, "job": 118, "event": "compaction_finished", "compaction_time_micros": 99595, "compaction_time_cpu_micros": 53563, "output_level": 6, "num_output_files": 1, "total_output_size": 9876851, "num_input_records": 14403, "num_output_records": 13925, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:06:20 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000185.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:06:20 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094380301955, "job": 118, "event": "table_file_deletion", "file_number": 185}
Jan 22 10:06:20 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000183.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:06:20 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094380306437, "job": 118, "event": "table_file_deletion", "file_number": 183}
Jan 22 10:06:20 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:06:20.199272) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:06:20 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:06:20.306572) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:06:20 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:06:20.306578) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:06:20 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:06:20.306580) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:06:20 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:06:20.306582) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:06:20 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:06:20.306584) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:06:21 np0005592158 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:21 np0005592158 ceph-mon[81715]: Health check update: 84 slow ops, oldest one blocked for 5368 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:06:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:21.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:06:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:21.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:22 np0005592158 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:22 np0005592158 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:22 np0005592158 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:23.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:06:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:23.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:24 np0005592158 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:24 np0005592158 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:24 np0005592158 podman[246773]: 2026-01-22 15:06:24.919276123 +0000 UTC m=+0.065474669 container exec 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:06:25 np0005592158 podman[246773]: 2026-01-22 15:06:25.012967317 +0000 UTC m=+0.159165773 container exec_died 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 22 10:06:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:25.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:25 np0005592158 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:25 np0005592158 ceph-mon[81715]: Health check update: 84 slow ops, oldest one blocked for 5373 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:25 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:06:25 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:06:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:06:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:25.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:06:26 np0005592158 ceph-mon[81715]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:26 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:06:26 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:06:26 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:06:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:27.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:27.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:28 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:06:28.234 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:06:28 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:06:28.236 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:06:28 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:06:28 np0005592158 ceph-mon[81715]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:29.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:29 np0005592158 ceph-mon[81715]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:06:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:29.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:06:30 np0005592158 ceph-mon[81715]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:30 np0005592158 ceph-mon[81715]: Health check update: 84 slow ops, oldest one blocked for 5378 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:31.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:31 np0005592158 ceph-mon[81715]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:06:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:31.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:06:32 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:06:32.239 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:06:32 np0005592158 ceph-mon[81715]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:33 np0005592158 podman[247032]: 2026-01-22 15:06:33.101736929 +0000 UTC m=+0.075465620 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Jan 22 10:06:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:33.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:06:33 np0005592158 ceph-mon[81715]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:33 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:06:33 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:06:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:33.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:34 np0005592158 ceph-mon[81715]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:35.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:35.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:35 np0005592158 ceph-mon[81715]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:35 np0005592158 ceph-mon[81715]: Health check update: 103 slow ops, oldest one blocked for 5383 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:37 np0005592158 ceph-mon[81715]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:37.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:37.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:38 np0005592158 ceph-mon[81715]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:06:39 np0005592158 ceph-mon[81715]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:06:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:39.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:06:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:06:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:39.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:06:40 np0005592158 ceph-mon[81715]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:41 np0005592158 ceph-mon[81715]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:41 np0005592158 ceph-mon[81715]: Health check update: 103 slow ops, oldest one blocked for 5388 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:41 np0005592158 ceph-mon[81715]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:41.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:41.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:42 np0005592158 ceph-mon[81715]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:43 np0005592158 ceph-mon[81715]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:43.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:06:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:06:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:43.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:06:44 np0005592158 ceph-mon[81715]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:45.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:45 np0005592158 ceph-mon[81715]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:45 np0005592158 ceph-mon[81715]: Health check update: 103 slow ops, oldest one blocked for 5393 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:06:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:45.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:06:46 np0005592158 ceph-mon[81715]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:06:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:47.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:06:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:06:47.505 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:06:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:06:47.506 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:06:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:06:47.506 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:06:47 np0005592158 ceph-mon[81715]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:06:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:47.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:06:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:06:48 np0005592158 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:49 np0005592158 podman[247101]: 2026-01-22 15:06:49.09713296 +0000 UTC m=+0.086924022 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 10:06:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:49.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:49.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:49 np0005592158 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:50 np0005592158 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:50 np0005592158 ceph-mon[81715]: Health check update: 103 slow ops, oldest one blocked for 5397 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:51.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:51.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:52 np0005592158 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:53 np0005592158 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:53.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:53 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:06:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:06:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:53.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:06:54 np0005592158 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:54 np0005592158 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:06:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:55.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:06:55 np0005592158 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:55 np0005592158 ceph-mon[81715]: Health check update: 74 slow ops, oldest one blocked for 5402 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:06:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:55.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:06:56 np0005592158 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:57.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:57 np0005592158 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:06:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:57.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:06:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:06:58 np0005592158 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:59.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:59 np0005592158 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:06:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:59.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:00 np0005592158 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:00 np0005592158 ceph-mon[81715]: Health check update: 74 slow ops, oldest one blocked for 5407 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:01.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:01 np0005592158 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:07:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:01.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:07:02 np0005592158 ceph-mon[81715]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:07:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:07:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:03.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:07:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:07:03 np0005592158 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:03.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:04 np0005592158 systemd[1]: Starting dnf makecache...
Jan 22 10:07:04 np0005592158 podman[247129]: 2026-01-22 15:07:04.078499885 +0000 UTC m=+0.065431227 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 10:07:04 np0005592158 dnf[247130]: Metadata cache refreshed recently.
Jan 22 10:07:04 np0005592158 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 22 10:07:04 np0005592158 systemd[1]: Finished dnf makecache.
Jan 22 10:07:04 np0005592158 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:07:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:05.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:07:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:05.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:06 np0005592158 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:06 np0005592158 ceph-mon[81715]: Health check update: 74 slow ops, oldest one blocked for 5412 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:07 np0005592158 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:07 np0005592158 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:07.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:07.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:08 np0005592158 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:07:09 np0005592158 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:09.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:07:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:09.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:07:10 np0005592158 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:11 np0005592158 ceph-mon[81715]: Health check update: 74 slow ops, oldest one blocked for 5417 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:11 np0005592158 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:11.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:07:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:11.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:07:12 np0005592158 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:13 np0005592158 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:13.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:13 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:07:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.003000079s ======
Jan 22 10:07:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:13.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000079s
Jan 22 10:07:14 np0005592158 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:15 np0005592158 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:15 np0005592158 ceph-mon[81715]: Health check update: 74 slow ops, oldest one blocked for 5422 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:15.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:15.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:16 np0005592158 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:17 np0005592158 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:17.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:07:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:17.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:07:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:07:18 np0005592158 ceph-mon[81715]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 10:07:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:19.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:19 np0005592158 ceph-mon[81715]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 10:07:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:19.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:20 np0005592158 podman[247149]: 2026-01-22 15:07:20.08962091 +0000 UTC m=+0.078222428 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 10:07:20 np0005592158 ceph-mon[81715]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 10:07:20 np0005592158 ceph-mon[81715]: Health check update: 74 slow ops, oldest one blocked for 5427 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:21.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:21 np0005592158 ceph-mon[81715]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 10:07:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:21.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:07:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:23.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:07:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:07:23 np0005592158 ceph-mon[81715]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 10:07:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:07:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:23.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:07:24 np0005592158 ceph-mon[81715]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 10:07:24 np0005592158 ceph-mon[81715]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 10:07:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:25.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:25 np0005592158 ceph-mon[81715]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 10:07:25 np0005592158 ceph-mon[81715]: Health check update: 81 slow ops, oldest one blocked for 5432 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 22 10:07:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:25.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 22 10:07:26 np0005592158 ceph-mon[81715]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 10:07:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:27.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:27 np0005592158 ceph-mon[81715]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 10:07:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:27.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:28 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:07:28 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:29.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:29.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:30 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:31.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:31 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:31 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:31 np0005592158 ceph-mon[81715]: Health check update: 81 slow ops, oldest one blocked for 5437 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:31.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:32 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:32 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e174 e174: 3 total, 3 up, 3 in
Jan 22 10:07:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:07:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:33.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:07:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:07:33 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:33 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #187. Immutable memtables: 0.
Jan 22 10:07:33 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:07:33.944897) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:07:33 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 119] Flushing memtable with next log file: 187
Jan 22 10:07:33 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094453944926, "job": 119, "event": "flush_started", "num_memtables": 1, "num_entries": 1271, "num_deletes": 306, "total_data_size": 2194411, "memory_usage": 2236688, "flush_reason": "Manual Compaction"}
Jan 22 10:07:33 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 119] Level-0 flush table #188: started
Jan 22 10:07:33 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094453955821, "cf_name": "default", "job": 119, "event": "table_file_creation", "file_number": 188, "file_size": 1441669, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 90103, "largest_seqno": 91369, "table_properties": {"data_size": 1436443, "index_size": 2429, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14759, "raw_average_key_size": 21, "raw_value_size": 1424591, "raw_average_value_size": 2076, "num_data_blocks": 104, "num_entries": 686, "num_filter_entries": 686, "num_deletions": 306, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094381, "oldest_key_time": 1769094381, "file_creation_time": 1769094453, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 188, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:07:33 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 119] Flush lasted 10976 microseconds, and 4734 cpu microseconds.
Jan 22 10:07:33 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:07:33 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:07:33.955866) [db/flush_job.cc:967] [default] [JOB 119] Level-0 flush table #188: 1441669 bytes OK
Jan 22 10:07:33 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:07:33.955890) [db/memtable_list.cc:519] [default] Level-0 commit table #188 started
Jan 22 10:07:33 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:07:33.957577) [db/memtable_list.cc:722] [default] Level-0 commit table #188: memtable #1 done
Jan 22 10:07:33 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:07:33.957593) EVENT_LOG_v1 {"time_micros": 1769094453957588, "job": 119, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:07:33 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:07:33.957611) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:07:33 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 119] Try to delete WAL files size 2187944, prev total WAL file size 2187944, number of live WAL files 2.
Jan 22 10:07:33 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000184.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:07:33 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:07:33.958568) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037373831' seq:72057594037927935, type:22 .. '7061786F730038303333' seq:0, type:0; will stop at (end)
Jan 22 10:07:33 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 120] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:07:33 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 119 Base level 0, inputs: [188(1407KB)], [186(9645KB)]
Jan 22 10:07:33 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094453958632, "job": 120, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [188], "files_L6": [186], "score": -1, "input_data_size": 11318520, "oldest_snapshot_seqno": -1}
Jan 22 10:07:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:33.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:34 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 120] Generated table #189: 13980 keys, 9685693 bytes, temperature: kUnknown
Jan 22 10:07:34 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094454021751, "cf_name": "default", "job": 120, "event": "table_file_creation", "file_number": 189, "file_size": 9685693, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9613091, "index_size": 36521, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35013, "raw_key_size": 385356, "raw_average_key_size": 27, "raw_value_size": 9378827, "raw_average_value_size": 670, "num_data_blocks": 1301, "num_entries": 13980, "num_filter_entries": 13980, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769094453, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 189, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:07:34 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:07:34 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:07:34.021979) [db/compaction/compaction_job.cc:1663] [default] [JOB 120] Compacted 1@0 + 1@6 files to L6 => 9685693 bytes
Jan 22 10:07:34 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:07:34.023049) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 179.2 rd, 153.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 9.4 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(14.6) write-amplify(6.7) OK, records in: 14611, records dropped: 631 output_compression: NoCompression
Jan 22 10:07:34 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:07:34.023065) EVENT_LOG_v1 {"time_micros": 1769094454023057, "job": 120, "event": "compaction_finished", "compaction_time_micros": 63177, "compaction_time_cpu_micros": 30736, "output_level": 6, "num_output_files": 1, "total_output_size": 9685693, "num_input_records": 14611, "num_output_records": 13980, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:07:34 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000188.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:07:34 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094454023348, "job": 120, "event": "table_file_deletion", "file_number": 188}
Jan 22 10:07:34 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000186.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:07:34 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094454025278, "job": 120, "event": "table_file_deletion", "file_number": 186}
Jan 22 10:07:34 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:07:33.958462) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:07:34 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:07:34.025391) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:07:34 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:07:34.025397) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:07:34 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:07:34.025400) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:07:34 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:07:34.025402) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:07:34 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:07:34.025404) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:07:35 np0005592158 podman[247305]: 2026-01-22 15:07:35.098803843 +0000 UTC m=+0.086574244 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 22 10:07:35 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:35 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:07:35 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:07:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:07:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:35.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:07:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:35.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:36 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:36 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:07:36 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:36 np0005592158 ceph-mon[81715]: Health check update: 36 slow ops, oldest one blocked for 5442 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:36 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:07:36 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:07:37 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:37.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:37.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:07:38 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:39.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:39 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:39.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:40 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:40 np0005592158 ceph-mon[81715]: Health check update: 36 slow ops, oldest one blocked for 5447 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:41.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:41 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:41 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:07:41 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:07:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:41.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:43 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:43.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:07:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:43.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:44 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:44 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:44 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e175 e175: 3 total, 3 up, 3 in
Jan 22 10:07:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:07:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:45.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:07:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:45.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:46 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:46 np0005592158 ceph-mon[81715]: Health check update: 36 slow ops, oldest one blocked for 5452 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:47 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:47.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:07:47.507 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:07:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:07:47.508 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:07:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:07:47.508 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:07:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:07:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:47.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:07:48 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:48 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e176 e176: 3 total, 3 up, 3 in
Jan 22 10:07:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:07:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:07:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:49.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:07:49 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:49.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:50 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:50 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e177 e177: 3 total, 3 up, 3 in
Jan 22 10:07:51 np0005592158 podman[247373]: 2026-01-22 15:07:51.113699129 +0000 UTC m=+0.087686463 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 22 10:07:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:51.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:51.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:52 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:52 np0005592158 ceph-mon[81715]: Health check update: 36 slow ops, oldest one blocked for 5457 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:53 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:53 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:53 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:07:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:53.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:54.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:54 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:55.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:55 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e178 e178: 3 total, 3 up, 3 in
Jan 22 10:07:55 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:56.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:57 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:57 np0005592158 ceph-mon[81715]: Health check update: 36 slow ops, oldest one blocked for 5462 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:07:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:57.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:07:57 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:07:57.625 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:07:57 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:07:57.625 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:07:57 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:07:57.626 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:07:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:58.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:58 np0005592158 ceph-mon[81715]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:58 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:07:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:07:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:07:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:07:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:59.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:07:59 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:08:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:00.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:08:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:08:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:01.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:08:01 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:01 np0005592158 ceph-mon[81715]: Health check update: 36 slow ops, oldest one blocked for 5467 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:02.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:03 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:03 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:08:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:03.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:04.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:04 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:05.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:05 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:05 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:06.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:06 np0005592158 podman[247400]: 2026-01-22 15:08:06.05053796 +0000 UTC m=+0.043818346 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Jan 22 10:08:06 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 e179: 3 total, 3 up, 3 in
Jan 22 10:08:07 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:07 np0005592158 ceph-mon[81715]: Health check update: 109 slow ops, oldest one blocked for 5472 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:08:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:07.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:08:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:08.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:08 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:08:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:09.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:09 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:09 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:10.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:10 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #190. Immutable memtables: 0.
Jan 22 10:08:10 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:08:10.621360) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:08:10 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 121] Flushing memtable with next log file: 190
Jan 22 10:08:10 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094490621430, "job": 121, "event": "flush_started", "num_memtables": 1, "num_entries": 841, "num_deletes": 329, "total_data_size": 1268789, "memory_usage": 1293632, "flush_reason": "Manual Compaction"}
Jan 22 10:08:10 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 121] Level-0 flush table #191: started
Jan 22 10:08:10 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094490628106, "cf_name": "default", "job": 121, "event": "table_file_creation", "file_number": 191, "file_size": 822633, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 91374, "largest_seqno": 92210, "table_properties": {"data_size": 818649, "index_size": 1507, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11369, "raw_average_key_size": 21, "raw_value_size": 809707, "raw_average_value_size": 1502, "num_data_blocks": 65, "num_entries": 539, "num_filter_entries": 539, "num_deletions": 329, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094454, "oldest_key_time": 1769094454, "file_creation_time": 1769094490, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 191, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:08:10 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 121] Flush lasted 6802 microseconds, and 3107 cpu microseconds.
Jan 22 10:08:10 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:08:10 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:08:10.628158) [db/flush_job.cc:967] [default] [JOB 121] Level-0 flush table #191: 822633 bytes OK
Jan 22 10:08:10 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:08:10.628180) [db/memtable_list.cc:519] [default] Level-0 commit table #191 started
Jan 22 10:08:10 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:08:10.629459) [db/memtable_list.cc:722] [default] Level-0 commit table #191: memtable #1 done
Jan 22 10:08:10 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:08:10.629474) EVENT_LOG_v1 {"time_micros": 1769094490629470, "job": 121, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:08:10 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:08:10.629492) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:08:10 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 121] Try to delete WAL files size 1263977, prev total WAL file size 1263977, number of live WAL files 2.
Jan 22 10:08:10 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000187.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:08:10 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:08:10.630009) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034323734' seq:72057594037927935, type:22 .. '6C6F676D0034353331' seq:0, type:0; will stop at (end)
Jan 22 10:08:10 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 122] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:08:10 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 121 Base level 0, inputs: [191(803KB)], [189(9458KB)]
Jan 22 10:08:10 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094490630092, "job": 122, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [191], "files_L6": [189], "score": -1, "input_data_size": 10508326, "oldest_snapshot_seqno": -1}
Jan 22 10:08:10 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 122] Generated table #192: 13846 keys, 10339873 bytes, temperature: kUnknown
Jan 22 10:08:10 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094490696720, "cf_name": "default", "job": 122, "event": "table_file_creation", "file_number": 192, "file_size": 10339873, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10267025, "index_size": 37151, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 34629, "raw_key_size": 383323, "raw_average_key_size": 27, "raw_value_size": 10033735, "raw_average_value_size": 724, "num_data_blocks": 1325, "num_entries": 13846, "num_filter_entries": 13846, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769094490, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 192, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:08:10 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:08:10 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:08:10.696969) [db/compaction/compaction_job.cc:1663] [default] [JOB 122] Compacted 1@0 + 1@6 files to L6 => 10339873 bytes
Jan 22 10:08:10 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:08:10.699046) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 157.6 rd, 155.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 9.2 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(25.3) write-amplify(12.6) OK, records in: 14519, records dropped: 673 output_compression: NoCompression
Jan 22 10:08:10 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:08:10.699071) EVENT_LOG_v1 {"time_micros": 1769094490699060, "job": 122, "event": "compaction_finished", "compaction_time_micros": 66680, "compaction_time_cpu_micros": 36484, "output_level": 6, "num_output_files": 1, "total_output_size": 10339873, "num_input_records": 14519, "num_output_records": 13846, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:08:10 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000191.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:08:10 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094490699342, "job": 122, "event": "table_file_deletion", "file_number": 191}
Jan 22 10:08:10 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000189.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:08:10 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094490701058, "job": 122, "event": "table_file_deletion", "file_number": 189}
Jan 22 10:08:10 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:08:10.629881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:08:10 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:08:10.701119) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:08:10 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:08:10.701125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:08:10 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:08:10.701127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:08:10 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:08:10.701128) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:08:10 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:08:10.701130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:08:11 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:11 np0005592158 ceph-mon[81715]: Health check update: 109 slow ops, oldest one blocked for 5477 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:08:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:11.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:08:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:12.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:12 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:13 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:08:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:13.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:13 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:13 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:14.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:15 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:15.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 22 10:08:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:16.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 22 10:08:16 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:16 np0005592158 ceph-mon[81715]: Health check update: 109 slow ops, oldest one blocked for 5482 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:17.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:08:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:18.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:08:18 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:18 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:08:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:08:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3992916383' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:08:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:08:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3992916383' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:08:19 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:19.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:20.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:20 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:20 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:08:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:21.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:08:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:22.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:22 np0005592158 podman[247420]: 2026-01-22 15:08:22.102419258 +0000 UTC m=+0.085103494 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:08:22 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:22 np0005592158 ceph-mon[81715]: Health check update: 109 slow ops, oldest one blocked for 5487 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:08:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:23.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:23 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:23 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:08:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:24.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:08:25 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:25.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:26.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:27 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:27 np0005592158 ceph-mon[81715]: Health check update: 109 slow ops, oldest one blocked for 5492 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:08:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:27.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:08:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:08:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:28.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:08:28 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:08:29 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:29 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:29 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:29.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:08:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:30.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:08:30 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:30 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:31.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:32.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:32 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:32 np0005592158 ceph-mon[81715]: Health check update: 109 slow ops, oldest one blocked for 5498 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:33 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:08:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:33.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:08:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:34.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:08:34 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:34 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:35 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:35.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:08:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:36.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:08:36 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:36 np0005592158 ceph-mon[81715]: Health check update: 109 slow ops, oldest one blocked for 5507 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:37 np0005592158 podman[247447]: 2026-01-22 15:08:37.076707003 +0000 UTC m=+0.070117131 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 22 10:08:37 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:37.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:38.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:08:38 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:39.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:40.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:40 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:41.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:42 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:42 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:42.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:08:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:43.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:43 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:43 np0005592158 ceph-mon[81715]: Health check update: 109 slow ops, oldest one blocked for 5513 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:44.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:44 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:44 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:08:44 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:44 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:08:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:45.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:45 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:46 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:08:46 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:08:46 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:08:46 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:08:46 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:08:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:46.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:47 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:08:47.508 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:08:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:08:47.509 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:08:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:08:47.509 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:08:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:08:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:47.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:08:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:08:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:48.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:08:48 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:48 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:08:49 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:08:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:49.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:08:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:08:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:50.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:08:51 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:51 np0005592158 ceph-mon[81715]: Health check update: 109 slow ops, oldest one blocked for 5518 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:51.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:08:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:52.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:08:52 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:52 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:53 np0005592158 podman[247720]: 2026-01-22 15:08:53.121399877 +0000 UTC m=+0.103675101 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Jan 22 10:08:53 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:08:53 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:53.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:54.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:54 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:55.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:56.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:56 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:56 np0005592158 ceph-mon[81715]: Health check update: 109 slow ops, oldest one blocked for 5523 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:57 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:57 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:57 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:08:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:57.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:08:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:58.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:08:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:08:58 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:08:58 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:08:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:59.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:00.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:00 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:01 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:01 np0005592158 ceph-mon[81715]: Health check update: 109 slow ops, oldest one blocked for 5528 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:01 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:01.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:09:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:02.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:09:02 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:09:03 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:03.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:04 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:09:04.104 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:09:04 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:09:04.105 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:09:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:04.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:04 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:05.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:06.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:06 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:06 np0005592158 ceph-mon[81715]: Health check update: 109 slow ops, oldest one blocked for 5533 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:07 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:07 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:07.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:08 np0005592158 podman[247796]: 2026-01-22 15:09:08.047455078 +0000 UTC m=+0.042611683 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 10:09:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:08.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:08 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:09:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:09:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:09.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:09:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:10.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:10 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:11 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:09:11.107 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:09:11 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:11 np0005592158 ceph-mon[81715]: Health check update: 109 slow ops, oldest one blocked for 5538 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:11 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:11.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:12.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:12 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:13 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:09:13 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:13.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:09:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:14.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:09:14 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:15 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:15 np0005592158 ceph-mon[81715]: Health check update: 109 slow ops, oldest one blocked for 5543 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:15.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:16.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:16 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:09:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:17.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:09:17 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:18.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:09:18 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:09:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:19.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:09:19 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:20.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:20 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:20 np0005592158 ceph-mon[81715]: Health check update: 109 slow ops, oldest one blocked for 5548 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:21.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:22 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:22.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:23 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:09:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:23.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:24 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:24 np0005592158 podman[247815]: 2026-01-22 15:09:24.133597234 +0000 UTC m=+0.111539222 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 22 10:09:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:24.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:25 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:09:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:25.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:09:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:26.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:27 np0005592158 ceph-mon[81715]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:27 np0005592158 ceph-mon[81715]: Health check update: 109 slow ops, oldest one blocked for 5553 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:27 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:27.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:28.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:28 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:09:28 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:28 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:29 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:09:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:29.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:09:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:30.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:31 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:31 np0005592158 ceph-mon[81715]: Health check update: 109 slow ops, oldest one blocked for 5558 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:31.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:32 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:32.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:33 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:09:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:33.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:34 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:09:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:34.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:09:35 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:35.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:36.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:36 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:36 np0005592158 ceph-mon[81715]: Health check update: 14 slow ops, oldest one blocked for 5563 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:37 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:37 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:09:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:37.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:09:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:38.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:38 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:09:39 np0005592158 podman[247842]: 2026-01-22 15:09:39.056452661 +0000 UTC m=+0.052325985 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 22 10:09:39 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:39.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:40.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:40 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:41 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:41 np0005592158 ceph-mon[81715]: Health check update: 14 slow ops, oldest one blocked for 5568 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:41.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:09:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:42.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:09:42 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:09:43 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:09:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:43.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:09:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:44.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:44 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:45 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:45 np0005592158 ceph-mon[81715]: Health check update: 14 slow ops, oldest one blocked for 5573 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:09:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:45.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:09:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:09:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:46.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:09:46 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:09:47.510 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:09:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:09:47.510 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:09:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:09:47.510 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:09:47 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:09:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:47.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:09:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:48.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:09:48 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:49 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:49.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:50.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:50 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:50 np0005592158 ceph-mon[81715]: Health check update: 14 slow ops, oldest one blocked for 5578 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:51 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:51.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:52.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:52 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:53 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:09:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:53.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:54 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:54.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:55 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:55 np0005592158 podman[247862]: 2026-01-22 15:09:55.171409349 +0000 UTC m=+0.161140601 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 22 10:09:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:09:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:55.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:09:56 np0005592158 ceph-mon[81715]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:56 np0005592158 ceph-mon[81715]: Health check update: 14 slow ops, oldest one blocked for 5583 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:56.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:57 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:09:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:09:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:57.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:09:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:58.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:58 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:09:58 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:09:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:09:59 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:09:59 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:09:59 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:09:59 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 10:09:59 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 10:09:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:09:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:59.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:00.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:00 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:00 np0005592158 ceph-mon[81715]: Health detail: HEALTH_WARN 14 slow ops, oldest one blocked for 5587 sec, osd.2 has slow ops
Jan 22 10:10:00 np0005592158 ceph-mon[81715]: [WRN] SLOW_OPS: 14 slow ops, oldest one blocked for 5587 sec, osd.2 has slow ops
Jan 22 10:10:00 np0005592158 ceph-mon[81715]: Health check update: 14 slow ops, oldest one blocked for 5587 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:10:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:01.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:10:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:02.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:02 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:10:02 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:10:03 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:03 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:03 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:03 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:10:03 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:10:03 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:10:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:10:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:03.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:04.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:04 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:05 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:10:05.009 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:10:05 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:10:05.010 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:10:05 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:10:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:05.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:10:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:10:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:06.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:10:06 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:06 np0005592158 ceph-mon[81715]: Health check update: 110 slow ops, oldest one blocked for 5593 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:07 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:07.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:08.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:08 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:10:09 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:09 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:10:09 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:10:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:09.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:10 np0005592158 podman[248068]: 2026-01-22 15:10:10.135172831 +0000 UTC m=+0.116105265 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 22 10:10:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:10:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:10.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:10:10 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:10 np0005592158 ceph-mon[81715]: Health check update: 110 slow ops, oldest one blocked for 5597 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:11 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:10:11.012 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:10:11 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:11.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:10:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:12.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:10:12 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:13 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:10:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:13.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:10:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:14.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:10:14 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:15 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:15 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:15.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:16.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:16 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:16 np0005592158 ceph-mon[81715]: Health check update: 110 slow ops, oldest one blocked for 5607 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:17.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:18.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:18 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:10:19 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:19 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:19.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:10:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:20.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:10:20 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:21 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:21.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:10:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:22.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:10:22 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:22 np0005592158 ceph-mon[81715]: Health check update: 110 slow ops, oldest one blocked for 5612 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:10:23 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:23.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:24.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:25 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:10:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:25.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:10:26 np0005592158 podman[248089]: 2026-01-22 15:10:26.115582902 +0000 UTC m=+0.094597118 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 10:10:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:26.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:26 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:26 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 10:10:27 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:27 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #193. Immutable memtables: 0.
Jan 22 10:10:27 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:10:27.382934) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:10:27 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 123] Flushing memtable with next log file: 193
Jan 22 10:10:27 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094627382971, "job": 123, "event": "flush_started", "num_memtables": 1, "num_entries": 2234, "num_deletes": 487, "total_data_size": 4150784, "memory_usage": 4221232, "flush_reason": "Manual Compaction"}
Jan 22 10:10:27 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 123] Level-0 flush table #194: started
Jan 22 10:10:27 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094627398018, "cf_name": "default", "job": 123, "event": "table_file_creation", "file_number": 194, "file_size": 2692767, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 92215, "largest_seqno": 94444, "table_properties": {"data_size": 2684212, "index_size": 4536, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2949, "raw_key_size": 26787, "raw_average_key_size": 22, "raw_value_size": 2663684, "raw_average_value_size": 2282, "num_data_blocks": 194, "num_entries": 1167, "num_filter_entries": 1167, "num_deletions": 487, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094490, "oldest_key_time": 1769094490, "file_creation_time": 1769094627, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 194, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:10:27 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 123] Flush lasted 15124 microseconds, and 6112 cpu microseconds.
Jan 22 10:10:27 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:10:27 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:10:27.398058) [db/flush_job.cc:967] [default] [JOB 123] Level-0 flush table #194: 2692767 bytes OK
Jan 22 10:10:27 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:10:27.398073) [db/memtable_list.cc:519] [default] Level-0 commit table #194 started
Jan 22 10:10:27 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:10:27.399180) [db/memtable_list.cc:722] [default] Level-0 commit table #194: memtable #1 done
Jan 22 10:10:27 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:10:27.399191) EVENT_LOG_v1 {"time_micros": 1769094627399188, "job": 123, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:10:27 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:10:27.399206) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:10:27 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 123] Try to delete WAL files size 4139543, prev total WAL file size 4139543, number of live WAL files 2.
Jan 22 10:10:27 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000190.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:10:27 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:10:27.400102) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038303332' seq:72057594037927935, type:22 .. '7061786F730038323834' seq:0, type:0; will stop at (end)
Jan 22 10:10:27 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 124] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:10:27 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 123 Base level 0, inputs: [194(2629KB)], [192(10097KB)]
Jan 22 10:10:27 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094627400188, "job": 124, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [194], "files_L6": [192], "score": -1, "input_data_size": 13032640, "oldest_snapshot_seqno": -1}
Jan 22 10:10:27 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 124] Generated table #195: 14022 keys, 11311499 bytes, temperature: kUnknown
Jan 22 10:10:27 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094627472487, "cf_name": "default", "job": 124, "event": "table_file_creation", "file_number": 195, "file_size": 11311499, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11236308, "index_size": 39046, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35077, "raw_key_size": 386744, "raw_average_key_size": 27, "raw_value_size": 10998669, "raw_average_value_size": 784, "num_data_blocks": 1405, "num_entries": 14022, "num_filter_entries": 14022, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769094627, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 195, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:10:27 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:10:27 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:10:27.472916) [db/compaction/compaction_job.cc:1663] [default] [JOB 124] Compacted 1@0 + 1@6 files to L6 => 11311499 bytes
Jan 22 10:10:27 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:10:27.474288) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 180.0 rd, 156.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 9.9 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(9.0) write-amplify(4.2) OK, records in: 15013, records dropped: 991 output_compression: NoCompression
Jan 22 10:10:27 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:10:27.474330) EVENT_LOG_v1 {"time_micros": 1769094627474304, "job": 124, "event": "compaction_finished", "compaction_time_micros": 72412, "compaction_time_cpu_micros": 32175, "output_level": 6, "num_output_files": 1, "total_output_size": 11311499, "num_input_records": 15013, "num_output_records": 14022, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:10:27 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000194.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:10:27 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094627475428, "job": 124, "event": "table_file_deletion", "file_number": 194}
Jan 22 10:10:27 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000192.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:10:27 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094627478830, "job": 124, "event": "table_file_deletion", "file_number": 192}
Jan 22 10:10:27 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:10:27.400012) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:10:27 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:10:27.478895) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:10:27 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:10:27.478902) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:10:27 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:10:27.478904) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:10:27 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:10:27.478906) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:10:27 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:10:27.478908) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:10:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:27.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:28.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:28 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:28 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:10:29 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:10:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:29.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:10:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:10:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:30.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:10:30 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:31 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:31 np0005592158 ceph-mon[81715]: Health check update: 110 slow ops, oldest one blocked for 5618 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:10:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:31.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:10:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:10:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:32.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:10:32 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:10:33 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:10:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:34.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:10:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:10:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:34.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:10:35 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:36.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:36 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:36 np0005592158 ceph-mon[81715]: Health check update: 110 slow ops, oldest one blocked for 5623 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:36.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:37 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:10:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:38.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:10:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:38.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:38 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:38 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:10:39 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:40.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:40.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:40 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:41 np0005592158 podman[248116]: 2026-01-22 15:10:41.107528441 +0000 UTC m=+0.086725416 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent)
Jan 22 10:10:41 np0005592158 ceph-mon[81715]: Health check update: 110 slow ops, oldest one blocked for 5628 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:41 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:42.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:10:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:42.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:10:42 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:10:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:44.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:44.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:44 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:46.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:46.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:46 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:46 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:10:47.511 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:10:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:10:47.511 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:10:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:10:47.511 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:10:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:10:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:48.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:10:48 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:48 np0005592158 ceph-mon[81715]: Health check update: 110 slow ops, oldest one blocked for 5633 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:48 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:48.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:10:49 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:10:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:50.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:10:50 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:50.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:51 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:52.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:52 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:52 np0005592158 ceph-mon[81715]: Health check update: 110 slow ops, oldest one blocked for 5638 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:52.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:53 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:53 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:53 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:10:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:54.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:54 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:10:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:54.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:10:55 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:56.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:56 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:56.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:57 np0005592158 podman[248137]: 2026-01-22 15:10:57.119309234 +0000 UTC m=+0.105091679 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 22 10:10:57 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:57 np0005592158 ceph-mon[81715]: Health check update: 110 slow ops, oldest one blocked for 5648 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:58.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:58 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:10:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:58.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:10:59 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:11:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:00.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:11:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:00.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:00 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:01 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:02.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:02.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:11:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:04.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:04.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:04 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:04 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:04 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:05 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:11:05.832 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:11:05 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:11:05.833 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:11:05 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:11:05.833 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:11:05 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:06.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:11:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:06.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:11:07 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:07 np0005592158 ceph-mon[81715]: Health check update: 110 slow ops, oldest one blocked for 5658 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:11:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:08.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:08.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:11:08 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:08 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:09 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:10.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:11:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:10.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:11:10 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:10 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:11:10 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:11:10 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:11:11 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:12 np0005592158 podman[248291]: 2026-01-22 15:11:12.056475872 +0000 UTC m=+0.048987324 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 10:11:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:12.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:12.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:12 np0005592158 ceph-mon[81715]: Health check update: 110 slow ops, oldest one blocked for 5663 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:11:12 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:13 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:11:14 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:11:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:14.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:11:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:14.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:15 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:11:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:16.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:11:16 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:16 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:11:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:16.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:11:17 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:11:17 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:11:17 np0005592158 ceph-mon[81715]: Health check update: 110 slow ops, oldest one blocked for 5668 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:11:17 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:18.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:18.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:11:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2143322617' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:11:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:11:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2143322617' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:11:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:11:18 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:19 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:20.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:20.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:20 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:21 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:11:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:22.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:11:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:22.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:23 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:23 np0005592158 ceph-mon[81715]: Health check update: 110 slow ops, oldest one blocked for 5673 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:11:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:11:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:24 np0005592158 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 10:11:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:24.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:24 np0005592158 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 10:11:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:24.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:24 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:24 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:11:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:26.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:11:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:11:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:26.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:11:27 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:27 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:11:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:28.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:11:28 np0005592158 podman[248362]: 2026-01-22 15:11:28.105047002 +0000 UTC m=+0.083988543 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 22 10:11:28 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:28 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:28.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:28 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:11:29 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:30.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:11:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:30.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:11:30 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:11:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:32.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:11:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:32.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:32 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:32 np0005592158 ceph-mon[81715]: Health check update: 110 slow ops, oldest one blocked for 5678 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:11:33 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:33 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:11:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:34.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:34.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:34 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:35 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:36.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:36.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:36 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:37 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:37 np0005592158 ceph-mon[81715]: Health check update: 110 slow ops, oldest one blocked for 5688 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:11:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:38.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:11:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:38.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:11:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:11:38 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:40.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:40 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:11:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:40.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:11:41 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:41 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:42.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:42.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:43 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:43 np0005592158 podman[248389]: 2026-01-22 15:11:43.082866312 +0000 UTC m=+0.060933675 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 10:11:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:11:44 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:44.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:11:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:44.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:11:45 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:46 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:46.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:46.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:47 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:47 np0005592158 ceph-mon[81715]: Health check update: 6 slow ops, oldest one blocked for 5698 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:11:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:11:47.512 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:11:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:11:47.512 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:11:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:11:47.513 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:11:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:48.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:48 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:48.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:11:49 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:49 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:11:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:50.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:11:50 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:50.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:51 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:11:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:52.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:11:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:52.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:52 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:53 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:11:53 np0005592158 ceph-mon[81715]: Health check update: 6 slow ops, oldest one blocked for 5703 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:11:53 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:54.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:54.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:54 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:55 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:11:55.455 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:11:55 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:11:55.456 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:11:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:56.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:56.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:57 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:58.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:58 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:58 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:11:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:58.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:11:59 np0005592158 podman[248409]: 2026-01-22 15:11:59.097966436 +0000 UTC m=+0.083507480 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:11:59 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:59 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:12:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:00.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:00.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:00 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:12:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:02.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:02.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:02 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:12:02 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:12:02 np0005592158 ceph-mon[81715]: Health check update: 6 slow ops, oldest one blocked for 5708 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:12:03 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:12:03.458 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:12:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:12:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:04.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:12:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:04.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:12:05 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:12:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:06.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:12:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:06.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:12:06 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:12:06 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:12:07 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:12:07 np0005592158 ceph-mon[81715]: Health check update: 6 slow ops, oldest one blocked for 5713 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:12:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:08.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:12:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:08.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:12:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:12:09 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:12:09 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:12:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:10.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:10.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:10 np0005592158 ceph-mon[81715]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:11 np0005592158 ceph-mon[81715]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:11 np0005592158 ceph-mon[81715]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:12.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:12.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:13 np0005592158 ceph-mon[81715]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:13 np0005592158 ceph-mon[81715]: Health check update: 6 slow ops, oldest one blocked for 5718 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:12:13 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:12:14 np0005592158 podman[248433]: 2026-01-22 15:12:14.092212635 +0000 UTC m=+0.076643446 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 22 10:12:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:14.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:14.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:14 np0005592158 ceph-mon[81715]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:14 np0005592158 ceph-mon[81715]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:12:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:16.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:12:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:16.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:16 np0005592158 ceph-mon[81715]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:17 np0005592158 ceph-mon[81715]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:17 np0005592158 ceph-mon[81715]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:17 np0005592158 ceph-mon[81715]: Health check update: 97 slow ops, oldest one blocked for 5728 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:12:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:12:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:18.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:12:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:18.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:12:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/478835531' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:12:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:12:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/478835531' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:12:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:12:19 np0005592158 ceph-mon[81715]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:19 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:12:19 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:12:19 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:12:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:20.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:20.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:20 np0005592158 ceph-mon[81715]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:22.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:22.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:23 np0005592158 ceph-mon[81715]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:23 np0005592158 ceph-mon[81715]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:12:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:24.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:24.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:24 np0005592158 ceph-mon[81715]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:24 np0005592158 ceph-mon[81715]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:24 np0005592158 ceph-mon[81715]: Health check update: 97 slow ops, oldest one blocked for 5733 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:12:25 np0005592158 ceph-mon[81715]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:25 np0005592158 ceph-mon[81715]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:26.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:12:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:26.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:12:27 np0005592158 ceph-mon[81715]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:28.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:28 np0005592158 ceph-mon[81715]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:28 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:28.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:28 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:12:29 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:30 np0005592158 podman[248584]: 2026-01-22 15:12:30.098433311 +0000 UTC m=+0.094211167 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 10:12:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:30.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:12:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:30.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:12:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:32.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:32 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:32 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:32.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:33 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:33 np0005592158 ceph-mon[81715]: Health check update: 97 slow ops, oldest one blocked for 5738 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:12:33 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:33 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:12:33 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:12:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:12:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:34.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:34.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:34 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:35 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:36.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:36.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:37 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:37 np0005592158 ceph-mon[81715]: Health check update: 49 slow ops, oldest one blocked for 5748 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:12:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:12:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:38.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:12:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:38.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:12:38 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:40.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:40 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:40 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:40.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:41 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:41 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:42.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:42.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:42 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:12:43 np0005592158 ceph-mon[81715]: Health check update: 49 slow ops, oldest one blocked for 5753 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:12:43 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:44.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:12:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:44.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:12:45 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:45 np0005592158 podman[248660]: 2026-01-22 15:12:45.059433128 +0000 UTC m=+0.050160696 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Jan 22 10:12:46 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:46.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:46.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:47 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:12:47.513 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:12:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:12:47.513 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:12:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:12:47.513 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:12:48 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:48.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:48.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:12:49 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:50 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:50.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:12:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:50.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:12:51 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:12:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:52.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:12:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:52.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:53 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:53 np0005592158 ceph-mon[81715]: Health check update: 49 slow ops, oldest one blocked for 5758 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:12:53 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:12:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:54.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:54.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:55 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:55 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:55 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:55 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:12:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:56.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:12:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:56.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:56 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:57 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:12:57.172 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:12:57 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:12:57.174 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:12:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:58.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:12:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:58.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:12:58 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:58 np0005592158 ceph-mon[81715]: Health check update: 49 slow ops, oldest one blocked for 5768 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:12:59 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:12:59 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:00 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:13:00.176 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:13:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:00.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:13:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:00.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:13:00 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:01 np0005592158 podman[248679]: 2026-01-22 15:13:01.104586217 +0000 UTC m=+0.095502272 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 22 10:13:01 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:02.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:13:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:02.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:13:02 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:02 np0005592158 ceph-mon[81715]: Health check update: 118 slow ops, oldest one blocked for 5773 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:13:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:13:03 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:13:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:04.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:13:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:04.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:05 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:06 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:06.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:06.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:07 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:07 np0005592158 ceph-mon[81715]: Health check update: 118 slow ops, oldest one blocked for 5778 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:13:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:08.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:08.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:13:08 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:09 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:09 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:10.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:13:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:10.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:13:10 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:12.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:12.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:12 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:13 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:13:13 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:13 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:14.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:13:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:14.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:13:14 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:15 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:16 np0005592158 podman[248706]: 2026-01-22 15:13:16.096047662 +0000 UTC m=+0.088458893 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 22 10:13:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:16.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:13:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:16.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:13:17 np0005592158 ceph-mon[81715]: Health check update: 118 slow ops, oldest one blocked for 5788 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:13:17 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:18 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:18.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:18.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:13:19 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:20.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:13:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:20.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:13:20 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:22 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:22 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:22 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #196. Immutable memtables: 0.
Jan 22 10:13:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:22.100442) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:13:22 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 125] Flushing memtable with next log file: 196
Jan 22 10:13:22 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094802100595, "job": 125, "event": "flush_started", "num_memtables": 1, "num_entries": 2595, "num_deletes": 544, "total_data_size": 4779188, "memory_usage": 4862240, "flush_reason": "Manual Compaction"}
Jan 22 10:13:22 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 125] Level-0 flush table #197: started
Jan 22 10:13:22 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094802124075, "cf_name": "default", "job": 125, "event": "table_file_creation", "file_number": 197, "file_size": 3125853, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 94449, "largest_seqno": 97039, "table_properties": {"data_size": 3116184, "index_size": 5202, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3397, "raw_key_size": 30539, "raw_average_key_size": 22, "raw_value_size": 3092778, "raw_average_value_size": 2316, "num_data_blocks": 224, "num_entries": 1335, "num_filter_entries": 1335, "num_deletions": 544, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094628, "oldest_key_time": 1769094628, "file_creation_time": 1769094802, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 197, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:13:22 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 125] Flush lasted 23622 microseconds, and 10811 cpu microseconds.
Jan 22 10:13:22 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:13:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:22.124143) [db/flush_job.cc:967] [default] [JOB 125] Level-0 flush table #197: 3125853 bytes OK
Jan 22 10:13:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:22.124173) [db/memtable_list.cc:519] [default] Level-0 commit table #197 started
Jan 22 10:13:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:22.125757) [db/memtable_list.cc:722] [default] Level-0 commit table #197: memtable #1 done
Jan 22 10:13:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:22.125771) EVENT_LOG_v1 {"time_micros": 1769094802125766, "job": 125, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:13:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:22.125795) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:13:22 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 125] Try to delete WAL files size 4766263, prev total WAL file size 4766263, number of live WAL files 2.
Jan 22 10:13:22 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000193.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:13:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:22.127193) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034353330' seq:72057594037927935, type:22 .. '6C6F676D0034373834' seq:0, type:0; will stop at (end)
Jan 22 10:13:22 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 126] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:13:22 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 125 Base level 0, inputs: [197(3052KB)], [195(10MB)]
Jan 22 10:13:22 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094802127317, "job": 126, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [197], "files_L6": [195], "score": -1, "input_data_size": 14437352, "oldest_snapshot_seqno": -1}
Jan 22 10:13:22 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 126] Generated table #198: 14256 keys, 14224617 bytes, temperature: kUnknown
Jan 22 10:13:22 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094802196842, "cf_name": "default", "job": 126, "event": "table_file_creation", "file_number": 198, "file_size": 14224617, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14144748, "index_size": 43148, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35653, "raw_key_size": 391448, "raw_average_key_size": 27, "raw_value_size": 13900208, "raw_average_value_size": 975, "num_data_blocks": 1579, "num_entries": 14256, "num_filter_entries": 14256, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769094802, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 198, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:13:22 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:13:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:22.197128) [db/compaction/compaction_job.cc:1663] [default] [JOB 126] Compacted 1@0 + 1@6 files to L6 => 14224617 bytes
Jan 22 10:13:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:22.198906) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 207.5 rd, 204.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 10.8 +0.0 blob) out(13.6 +0.0 blob), read-write-amplify(9.2) write-amplify(4.6) OK, records in: 15357, records dropped: 1101 output_compression: NoCompression
Jan 22 10:13:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:22.198936) EVENT_LOG_v1 {"time_micros": 1769094802198922, "job": 126, "event": "compaction_finished", "compaction_time_micros": 69567, "compaction_time_cpu_micros": 33493, "output_level": 6, "num_output_files": 1, "total_output_size": 14224617, "num_input_records": 15357, "num_output_records": 14256, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:13:22 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000197.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:13:22 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094802200036, "job": 126, "event": "table_file_deletion", "file_number": 197}
Jan 22 10:13:22 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000195.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:13:22 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094802203695, "job": 126, "event": "table_file_deletion", "file_number": 195}
Jan 22 10:13:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:22.126991) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:13:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:22.203909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:13:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:22.203920) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:13:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:22.203923) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:13:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:22.203926) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:13:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:22.203928) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:13:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:22.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:22.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:23 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:23 np0005592158 ceph-mon[81715]: Health check update: 118 slow ops, oldest one blocked for 5793 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:13:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:13:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:24.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:24 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:13:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:24.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:13:25 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:25 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:26.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:26 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:13:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:26.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:13:27 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:27 np0005592158 ceph-mon[81715]: Health check update: 118 slow ops, oldest one blocked for 5798 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:13:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:28.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:13:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:28.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:13:28 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:13:28 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:29 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:30.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:13:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:30.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:13:30 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:31 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:32 np0005592158 podman[248728]: 2026-01-22 15:13:32.094384184 +0000 UTC m=+0.083115109 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 22 10:13:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:32.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:32.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:32 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:32 np0005592158 ceph-mon[81715]: Health check update: 118 slow ops, oldest one blocked for 5803 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:13:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:13:33 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:13:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:34.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:13:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:34.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:34 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #199. Immutable memtables: 0.
Jan 22 10:13:34 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:34.670187) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:13:34 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 127] Flushing memtable with next log file: 199
Jan 22 10:13:34 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094814670221, "job": 127, "event": "flush_started", "num_memtables": 1, "num_entries": 435, "num_deletes": 274, "total_data_size": 343312, "memory_usage": 352712, "flush_reason": "Manual Compaction"}
Jan 22 10:13:34 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 127] Level-0 flush table #200: started
Jan 22 10:13:34 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094814673621, "cf_name": "default", "job": 127, "event": "table_file_creation", "file_number": 200, "file_size": 224727, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 97044, "largest_seqno": 97474, "table_properties": {"data_size": 222373, "index_size": 389, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6751, "raw_average_key_size": 19, "raw_value_size": 217350, "raw_average_value_size": 635, "num_data_blocks": 17, "num_entries": 342, "num_filter_entries": 342, "num_deletions": 274, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094803, "oldest_key_time": 1769094803, "file_creation_time": 1769094814, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 200, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:13:34 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 127] Flush lasted 3494 microseconds, and 1248 cpu microseconds.
Jan 22 10:13:34 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:13:34 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:34.673679) [db/flush_job.cc:967] [default] [JOB 127] Level-0 flush table #200: 224727 bytes OK
Jan 22 10:13:34 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:34.673696) [db/memtable_list.cc:519] [default] Level-0 commit table #200 started
Jan 22 10:13:34 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:34.674755) [db/memtable_list.cc:722] [default] Level-0 commit table #200: memtable #1 done
Jan 22 10:13:34 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:34.674769) EVENT_LOG_v1 {"time_micros": 1769094814674764, "job": 127, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:13:34 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:34.674784) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:13:34 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 127] Try to delete WAL files size 340461, prev total WAL file size 340461, number of live WAL files 2.
Jan 22 10:13:34 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000196.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:13:34 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:34.675070) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038323833' seq:72057594037927935, type:22 .. '7061786F730038353335' seq:0, type:0; will stop at (end)
Jan 22 10:13:34 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 128] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:13:34 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 127 Base level 0, inputs: [200(219KB)], [198(13MB)]
Jan 22 10:13:34 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094814675106, "job": 128, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [200], "files_L6": [198], "score": -1, "input_data_size": 14449344, "oldest_snapshot_seqno": -1}
Jan 22 10:13:34 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 128] Generated table #201: 14040 keys, 12781736 bytes, temperature: kUnknown
Jan 22 10:13:34 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094814753027, "cf_name": "default", "job": 128, "event": "table_file_creation", "file_number": 201, "file_size": 12781736, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12704229, "index_size": 41298, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35141, "raw_key_size": 387535, "raw_average_key_size": 27, "raw_value_size": 12464090, "raw_average_value_size": 887, "num_data_blocks": 1497, "num_entries": 14040, "num_filter_entries": 14040, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769094814, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 201, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:13:34 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:13:34 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:34.753400) [db/compaction/compaction_job.cc:1663] [default] [JOB 128] Compacted 1@0 + 1@6 files to L6 => 12781736 bytes
Jan 22 10:13:34 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:34.754862) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 185.0 rd, 163.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 13.6 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(121.2) write-amplify(56.9) OK, records in: 14598, records dropped: 558 output_compression: NoCompression
Jan 22 10:13:34 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:34.754878) EVENT_LOG_v1 {"time_micros": 1769094814754870, "job": 128, "event": "compaction_finished", "compaction_time_micros": 78101, "compaction_time_cpu_micros": 45587, "output_level": 6, "num_output_files": 1, "total_output_size": 12781736, "num_input_records": 14598, "num_output_records": 14040, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:13:34 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000200.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:13:34 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094814755338, "job": 128, "event": "table_file_deletion", "file_number": 200}
Jan 22 10:13:34 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000198.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:13:34 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094814758038, "job": 128, "event": "table_file_deletion", "file_number": 198}
Jan 22 10:13:34 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:34.675019) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:13:34 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:34.758253) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:13:34 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:34.758261) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:13:34 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:34.758266) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:13:34 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:34.758269) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:13:34 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:13:34.758272) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:13:34 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:34 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 10:13:34 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:13:34 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:13:34 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:13:35 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:36.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:36.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:36 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:38 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:38 np0005592158 ceph-mon[81715]: Health check update: 118 slow ops, oldest one blocked for 5808 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:13:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:38.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:38.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:13:39 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:40 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:40.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:40.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:41 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:41 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:41 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:13:41 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:13:42 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:42.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:13:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:42.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:13:43 np0005592158 ceph-mon[81715]: Health check update: 118 slow ops, oldest one blocked for 5813 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:13:43 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:13:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:44.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:44 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:44.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:45 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:46.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:46 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:46.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:47 np0005592158 podman[248935]: 2026-01-22 15:13:47.052359492 +0000 UTC m=+0.046208650 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 22 10:13:47 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:13:47.514 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:13:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:13:47.514 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:13:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:13:47.514 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:13:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:48.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:48 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:48.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:13:49 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:13:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:50.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:13:50 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:50.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:51 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:52.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:52 np0005592158 ceph-mon[81715]: Health check update: 118 slow ops, oldest one blocked for 5823 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:13:52 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:52.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:53 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:13:53 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:54.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:13:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:54.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:13:54 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:56 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:56.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:56.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:58 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:13:58.307 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:13:58 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:13:58.308 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:13:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:58.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:13:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:13:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:13:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:58.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:14:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:00.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:00.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:00 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:00 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:00 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:00 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:00 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:01 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:14:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:02.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:14:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:02.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:02 np0005592158 ceph-mon[81715]: Health check update: 118 slow ops, oldest one blocked for 5833 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:14:02 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:03 np0005592158 podman[248954]: 2026-01-22 15:14:03.143324708 +0000 UTC m=+0.125964259 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Jan 22 10:14:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:14:03 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:04 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:14:04.311 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:14:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:04.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:14:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:04.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:14:05 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:06 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:06.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:06.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:07 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:07 np0005592158 ceph-mon[81715]: Health check update: 118 slow ops, oldest one blocked for 5838 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:14:08 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:14:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:08.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:14:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:14:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:08.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:09 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:10.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:10.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:11 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:11 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:12.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:14:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:12.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:14:13 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:13 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:13 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:13 np0005592158 ceph-mon[81715]: Health check update: 118 slow ops, oldest one blocked for 5843 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:14:13 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:14:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:14:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:14.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:14:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:14:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:14.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:14:14 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:16.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:16.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:16 np0005592158 ceph-mon[81715]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:17 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:18 np0005592158 podman[248981]: 2026-01-22 15:14:18.063015629 +0000 UTC m=+0.055944801 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:14:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:18.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:14:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:18.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:19 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:19 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:20 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:14:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:20.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:14:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:20.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:21 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:22.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:22 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:22 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:22 np0005592158 ceph-mon[81715]: Health check update: 118 slow ops, oldest one blocked for 5847 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:14:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:22.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:14:23 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:24.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:14:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:24.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:14:24 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:24 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:14:24 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1130213286' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:14:24 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:14:24 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1130213286' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:14:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:14:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:26.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:14:26 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:26.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:27 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:27 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:27 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 5857 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:14:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:28.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:28 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:14:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:28.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:30.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:30.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:30 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:31 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:31 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:31 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:32.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:14:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:32.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:14:33 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:33 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 5862 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:14:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:14:34 np0005592158 podman[249001]: 2026-01-22 15:14:34.113444708 +0000 UTC m=+0.098733808 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 10:14:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:14:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:34.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:14:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:34.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:34 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:34 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 10:14:34 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.5 total, 600.0 interval#012Cumulative writes: 15K writes, 47K keys, 15K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 15K writes, 5211 syncs, 2.94 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 872 writes, 1903 keys, 872 commit groups, 1.0 writes per commit group, ingest: 0.90 MB, 0.00 MB/s#012Interval WAL: 872 writes, 408 syncs, 2.14 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.5 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.5 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.5 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtab
Jan 22 10:14:35 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:35 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:36.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:36.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:36 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:37 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:38.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:14:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:14:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:38.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:14:38 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:40 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:40.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:40.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:41 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:42.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:42 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:42 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 5867 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:14:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:42.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:14:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:44.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:14:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:44.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:14:44 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:44 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:44 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:14:44 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:14:44 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:14:45 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:45 np0005592158 ceph-mon[81715]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:46.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:46.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:46 np0005592158 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:47 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #202. Immutable memtables: 0.
Jan 22 10:14:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:14:47.200454) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:14:47 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 129] Flushing memtable with next log file: 202
Jan 22 10:14:47 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094887200494, "job": 129, "event": "flush_started", "num_memtables": 1, "num_entries": 1229, "num_deletes": 369, "total_data_size": 1952493, "memory_usage": 1977216, "flush_reason": "Manual Compaction"}
Jan 22 10:14:47 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 129] Level-0 flush table #203: started
Jan 22 10:14:47 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094887208697, "cf_name": "default", "job": 129, "event": "table_file_creation", "file_number": 203, "file_size": 847476, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 97479, "largest_seqno": 98703, "table_properties": {"data_size": 843078, "index_size": 1601, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 15841, "raw_average_key_size": 23, "raw_value_size": 832092, "raw_average_value_size": 1209, "num_data_blocks": 69, "num_entries": 688, "num_filter_entries": 688, "num_deletions": 369, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094814, "oldest_key_time": 1769094814, "file_creation_time": 1769094887, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 203, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:14:47 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 129] Flush lasted 8321 microseconds, and 3573 cpu microseconds.
Jan 22 10:14:47 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:14:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:14:47.208765) [db/flush_job.cc:967] [default] [JOB 129] Level-0 flush table #203: 847476 bytes OK
Jan 22 10:14:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:14:47.208789) [db/memtable_list.cc:519] [default] Level-0 commit table #203 started
Jan 22 10:14:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:14:47.209780) [db/memtable_list.cc:722] [default] Level-0 commit table #203: memtable #1 done
Jan 22 10:14:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:14:47.209793) EVENT_LOG_v1 {"time_micros": 1769094887209789, "job": 129, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:14:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:14:47.209810) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:14:47 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 129] Try to delete WAL files size 1945909, prev total WAL file size 1945909, number of live WAL files 2.
Jan 22 10:14:47 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000199.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:14:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:14:47.210498) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373537' seq:72057594037927935, type:22 .. '6D6772737461740033303038' seq:0, type:0; will stop at (end)
Jan 22 10:14:47 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 130] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:14:47 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 129 Base level 0, inputs: [203(827KB)], [201(12MB)]
Jan 22 10:14:47 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094887210557, "job": 130, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [203], "files_L6": [201], "score": -1, "input_data_size": 13629212, "oldest_snapshot_seqno": -1}
Jan 22 10:14:47 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 130] Generated table #204: 14005 keys, 10132259 bytes, temperature: kUnknown
Jan 22 10:14:47 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094887288216, "cf_name": "default", "job": 130, "event": "table_file_creation", "file_number": 204, "file_size": 10132259, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10058756, "index_size": 37358, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35077, "raw_key_size": 386606, "raw_average_key_size": 27, "raw_value_size": 9823222, "raw_average_value_size": 701, "num_data_blocks": 1335, "num_entries": 14005, "num_filter_entries": 14005, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769094887, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 204, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:14:47 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:14:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:14:47.288476) [db/compaction/compaction_job.cc:1663] [default] [JOB 130] Compacted 1@0 + 1@6 files to L6 => 10132259 bytes
Jan 22 10:14:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:14:47.290340) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 175.4 rd, 130.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 12.2 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(28.0) write-amplify(12.0) OK, records in: 14728, records dropped: 723 output_compression: NoCompression
Jan 22 10:14:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:14:47.290356) EVENT_LOG_v1 {"time_micros": 1769094887290348, "job": 130, "event": "compaction_finished", "compaction_time_micros": 77719, "compaction_time_cpu_micros": 50705, "output_level": 6, "num_output_files": 1, "total_output_size": 10132259, "num_input_records": 14728, "num_output_records": 14005, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:14:47 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000203.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:14:47 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094887290577, "job": 130, "event": "table_file_deletion", "file_number": 203}
Jan 22 10:14:47 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000201.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:14:47 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094887292611, "job": 130, "event": "table_file_deletion", "file_number": 201}
Jan 22 10:14:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:14:47.210399) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:14:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:14:47.292704) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:14:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:14:47.292709) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:14:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:14:47.292711) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:14:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:14:47.292712) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:14:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:14:47.292714) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:14:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:14:47.514 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:14:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:14:47.515 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:14:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:14:47.515 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:14:47 np0005592158 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:47 np0005592158 ceph-mon[81715]: Health check update: 2 slow ops, oldest one blocked for 5877 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:14:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:48.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:14:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:14:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:48.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:14:48 np0005592158 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:49 np0005592158 podman[249158]: 2026-01-22 15:14:49.052353735 +0000 UTC m=+0.042100870 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 10:14:50 np0005592158 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:50 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:14:50 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:14:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:14:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:50.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:14:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:14:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:50.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:14:51 np0005592158 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:51 np0005592158 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:52.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:52.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:53 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:14:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:54.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:54.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:54 np0005592158 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:54 np0005592158 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:54 np0005592158 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:55 np0005592158 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:56.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:56.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:57 np0005592158 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:57 np0005592158 ceph-mon[81715]: Health check update: 122 slow ops, oldest one blocked for 5887 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:14:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:58.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:14:58 np0005592158 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:14:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:58.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:59 np0005592158 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:59 np0005592158 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:00.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:15:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:00.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:15:01 np0005592158 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:02 np0005592158 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:02.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:02.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:03 np0005592158 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:03 np0005592158 ceph-mon[81715]: Health check update: 122 slow ops, oldest one blocked for 5892 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:15:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:15:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:04.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:04 np0005592158 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:04 np0005592158 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:04.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:05 np0005592158 podman[249228]: 2026-01-22 15:15:05.09659957 +0000 UTC m=+0.092329307 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 10:15:05 np0005592158 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:06.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:06 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:15:06.538 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:15:06 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:15:06.539 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:15:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:15:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:06.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:15:06 np0005592158 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:07 np0005592158 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:15:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:08.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:15:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:15:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:08.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:08 np0005592158 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:09 np0005592158 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:10.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:15:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:10.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:15:10 np0005592158 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:11 np0005592158 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:12.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:12.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:12 np0005592158 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:12 np0005592158 ceph-mon[81715]: Health check update: 122 slow ops, oldest one blocked for 5902 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:15:13 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:15:13 np0005592158 ceph-mon[81715]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:14.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:14 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:15:14.540 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:15:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:14.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:15 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:16 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:16 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:15:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:16.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:15:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:15:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:16.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:15:17 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:17 np0005592158 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 5907 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:15:18 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:18.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:15:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:18.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:15:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3017842320' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:15:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:15:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3017842320' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:15:19 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:20 np0005592158 podman[249256]: 2026-01-22 15:15:20.051487755 +0000 UTC m=+0.044029431 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:15:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:20.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:15:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:20.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:15:21 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:22.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:22.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:15:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:24.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:24.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:26 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:26 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:26 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:26 np0005592158 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 5912 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:15:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:26.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:15:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:26.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:15:27 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:27 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:27 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:28.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:28 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:15:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:15:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:28.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:15:28 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:28 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:29 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:30.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:15:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:30.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:15:31 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 10:15:31 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6000.0 total, 600.0 interval
Cumulative writes: 18K writes, 99K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
Cumulative WAL: 18K writes, 18K syncs, 1.00 writes per sync, written: 0.17 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1740 writes, 10K keys, 1740 commit groups, 1.0 writes per commit group, ingest: 16.39 MB, 0.03 MB/s
Interval WAL: 1740 writes, 1740 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     74.2      1.44              0.35        65    0.022       0      0       0.0       0.0
  L6      1/0    9.66 MB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   5.8    145.4    125.6      4.91              1.87        64    0.077    653K    35K       0.0       0.0
 Sum      1/0    9.66 MB   0.0      0.7     0.1      0.6       0.7      0.1       0.0   6.8    112.5    114.0      6.35              2.22       129    0.049    653K    35K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.6    142.7    139.8      0.61              0.32        14    0.043    103K   5155       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   0.0    145.4    125.6      4.91              1.87        64    0.077    653K    35K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     74.3      1.43              0.35        64    0.022       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.0 total, 600.0 interval
Flush(GB): cumulative 0.104, interval 0.010
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.71 GB write, 0.12 MB/s write, 0.70 GB read, 0.12 MB/s read, 6.3 seconds
Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.6 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55f7686a91f0#2 capacity: 304.00 MB usage: 75.56 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000719 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3956,71.55 MB,23.5368%) FilterBlock(129,1.77 MB,0.581977%) IndexBlock(129,2.23 MB,0.735037%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Jan 22 10:15:32 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:32 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:32 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:32 np0005592158 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 5917 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:15:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:32.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:32.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:15:33 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:15:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:34.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:15:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:15:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:34.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:15:35 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:36 np0005592158 podman[249275]: 2026-01-22 15:15:36.096543669 +0000 UTC m=+0.083210712 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:15:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:36.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:15:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:36.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:15:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:38.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:15:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:38.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:39 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:39 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:39 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:39 np0005592158 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 5922 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:15:39 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:40 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:40 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:40.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:40.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:41 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:42.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:42.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:43 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:43 np0005592158 ceph-mon[81715]: Health check update: 3 slow ops, oldest one blocked for 5932 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:15:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:15:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:44.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:44.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:44 np0005592158 ceph-mon[81715]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:44 np0005592158 ceph-mon[81715]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:46 np0005592158 ceph-mon[81715]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:46.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:15:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:46.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:15:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:15:47.515 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 10:15:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:15:47.516 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 10:15:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:15:47.516 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 10:15:47 np0005592158 ceph-mon[81715]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:47 np0005592158 ceph-mon[81715]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:47 np0005592158 ceph-mon[81715]: Health check update: 123 slow ops, oldest one blocked for 5938 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:15:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:48.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:15:48 np0005592158 ceph-mon[81715]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:48.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:49 np0005592158 ceph-mon[81715]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:50.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:15:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:50.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:15:50 np0005592158 ceph-mon[81715]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:51 np0005592158 podman[249433]: 2026-01-22 15:15:51.066402606 +0000 UTC m=+0.053327271 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true)
Jan 22 10:15:51 np0005592158 ceph-mon[81715]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:51 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:15:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:52.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:52.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:53 np0005592158 ceph-mon[81715]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:53 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:15:53 np0005592158 ceph-mon[81715]: Health check update: 123 slow ops, oldest one blocked for 5943 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:15:53 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:15:53 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:15:53 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:15:53 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:15:54 np0005592158 ceph-mon[81715]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:54.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:15:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:54.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:15:55 np0005592158 ceph-mon[81715]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:15:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:56.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:15:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:15:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:56.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:15:56 np0005592158 ceph-mon[81715]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:57 np0005592158 ceph-mon[81715]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:57 np0005592158 ceph-mon[81715]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:57 np0005592158 ceph-mon[81715]: Health check update: 123 slow ops, oldest one blocked for 5948 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:15:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:58.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:15:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:15:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:15:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:58.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:15:59 np0005592158 ceph-mon[81715]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:16:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:16:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:00.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:16:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:00.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:01 np0005592158 ceph-mon[81715]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:16:01 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:16:01 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:16:01 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:16:02 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:16:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:02.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:02.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:03 np0005592158 ceph-mon[81715]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:16:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:16:04 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:16:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:04.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:16:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:04.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:05 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:05 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:06 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:06.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:16:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:06.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:16:07 np0005592158 podman[249504]: 2026-01-22 15:16:07.117059412 +0000 UTC m=+0.097708421 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 22 10:16:07 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 5957 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:16:07 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:08.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:16:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:08.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:09 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:10 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:10.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:10.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:11 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:12 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:12 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:12.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:12.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:13 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 5963 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:16:13 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:13 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:16:14 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:14.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:14.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:15 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:16.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:16.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:17 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:18.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:16:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:18.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:18 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:18 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:20 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:20.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:20.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:21 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:21 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:22 np0005592158 podman[249532]: 2026-01-22 15:16:22.081573855 +0000 UTC m=+0.059653481 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 10:16:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:16:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:22.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:16:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:16:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:22.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:16:23 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 5973 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:16:23 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:16:24 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:24 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:16:24.411 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=58, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=57) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:16:24 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:16:24.412 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:16:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:16:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:24.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:16:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:24.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:25 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:26.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:26.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:26 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:26 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:27 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:27 np0005592158 ceph-mon[81715]: Health check update: 4 slow ops, oldest one blocked for 5978 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:16:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:28.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:28 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:16:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:16:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:28.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:16:29 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:29 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:16:29.415 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '58'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:16:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:16:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:30.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:16:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:31.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:31 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:32 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:32 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:32.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:16:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:33.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:16:33 np0005592158 ceph-mon[81715]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:16:34 np0005592158 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:16:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:34.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:16:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:35.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:35 np0005592158 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:35 np0005592158 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:36 np0005592158 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:36.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:16:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:37.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:16:37 np0005592158 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:37 np0005592158 ceph-mon[81715]: Health check update: 116 slow ops, oldest one blocked for 5987 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:16:37 np0005592158 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:38 np0005592158 podman[249552]: 2026-01-22 15:16:38.122522889 +0000 UTC m=+0.099543100 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 22 10:16:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:16:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:16:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:38.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:16:39 np0005592158 ceph-mon[81715]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:16:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:16:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:39.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:16:40 np0005592158 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:40 np0005592158 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:16:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:40.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:16:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:41.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:42 np0005592158 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:16:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:42.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:16:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:43.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:16:43 np0005592158 ceph-mon[81715]: Health check update: 116 slow ops, oldest one blocked for 5992 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:16:43 np0005592158 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:44.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:45.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:45 np0005592158 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:45 np0005592158 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:45 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #205. Immutable memtables: 0.
Jan 22 10:16:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:16:45.719477) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:16:45 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 131] Flushing memtable with next log file: 205
Jan 22 10:16:45 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095005719596, "job": 131, "event": "flush_started", "num_memtables": 1, "num_entries": 1826, "num_deletes": 446, "total_data_size": 3291801, "memory_usage": 3333600, "flush_reason": "Manual Compaction"}
Jan 22 10:16:45 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 131] Level-0 flush table #206: started
Jan 22 10:16:45 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095005732442, "cf_name": "default", "job": 131, "event": "table_file_creation", "file_number": 206, "file_size": 2139298, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 98708, "largest_seqno": 100529, "table_properties": {"data_size": 2132203, "index_size": 3524, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 22351, "raw_average_key_size": 22, "raw_value_size": 2115184, "raw_average_value_size": 2149, "num_data_blocks": 153, "num_entries": 984, "num_filter_entries": 984, "num_deletions": 446, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094887, "oldest_key_time": 1769094887, "file_creation_time": 1769095005, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 206, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:16:45 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 131] Flush lasted 12936 microseconds, and 5562 cpu microseconds.
Jan 22 10:16:45 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:16:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:16:45.732479) [db/flush_job.cc:967] [default] [JOB 131] Level-0 flush table #206: 2139298 bytes OK
Jan 22 10:16:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:16:45.732495) [db/memtable_list.cc:519] [default] Level-0 commit table #206 started
Jan 22 10:16:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:16:45.733694) [db/memtable_list.cc:722] [default] Level-0 commit table #206: memtable #1 done
Jan 22 10:16:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:16:45.733708) EVENT_LOG_v1 {"time_micros": 1769095005733703, "job": 131, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:16:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:16:45.733748) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:16:45 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 131] Try to delete WAL files size 3282437, prev total WAL file size 3282437, number of live WAL files 2.
Jan 22 10:16:45 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000202.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:16:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:16:45.734527) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038353334' seq:72057594037927935, type:22 .. '7061786F730038373836' seq:0, type:0; will stop at (end)
Jan 22 10:16:45 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 132] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:16:45 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 131 Base level 0, inputs: [206(2089KB)], [204(9894KB)]
Jan 22 10:16:45 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095005734572, "job": 132, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [206], "files_L6": [204], "score": -1, "input_data_size": 12271557, "oldest_snapshot_seqno": -1}
Jan 22 10:16:45 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 132] Generated table #207: 14084 keys, 10401332 bytes, temperature: kUnknown
Jan 22 10:16:45 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095005786424, "cf_name": "default", "job": 132, "event": "table_file_creation", "file_number": 207, "file_size": 10401332, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10326854, "index_size": 38141, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35269, "raw_key_size": 388185, "raw_average_key_size": 27, "raw_value_size": 10089523, "raw_average_value_size": 716, "num_data_blocks": 1367, "num_entries": 14084, "num_filter_entries": 14084, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769095005, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 207, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:16:45 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:16:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:16:45.786684) [db/compaction/compaction_job.cc:1663] [default] [JOB 132] Compacted 1@0 + 1@6 files to L6 => 10401332 bytes
Jan 22 10:16:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:16:45.787849) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 236.4 rd, 200.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 9.7 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(10.6) write-amplify(4.9) OK, records in: 14989, records dropped: 905 output_compression: NoCompression
Jan 22 10:16:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:16:45.787864) EVENT_LOG_v1 {"time_micros": 1769095005787857, "job": 132, "event": "compaction_finished", "compaction_time_micros": 51918, "compaction_time_cpu_micros": 26482, "output_level": 6, "num_output_files": 1, "total_output_size": 10401332, "num_input_records": 14989, "num_output_records": 14084, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:16:45 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000206.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:16:45 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095005788395, "job": 132, "event": "table_file_deletion", "file_number": 206}
Jan 22 10:16:45 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000204.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:16:45 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095005790306, "job": 132, "event": "table_file_deletion", "file_number": 204}
Jan 22 10:16:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:16:45.734448) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:16:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:16:45.790352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:16:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:16:45.790357) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:16:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:16:45.790359) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:16:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:16:45.790360) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:16:45 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:16:45.790362) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:16:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:46.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:47.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:16:47.516 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:16:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:16:47.517 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:16:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:16:47.517 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:16:47 np0005592158 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:16:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:48.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:48 np0005592158 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:48 np0005592158 ceph-mon[81715]: Health check update: 116 slow ops, oldest one blocked for 5997 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:16:48 np0005592158 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:48 np0005592158 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:49.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:49 np0005592158 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:50.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:51.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:51 np0005592158 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:52 np0005592158 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:52.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:53.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:53 np0005592158 podman[249578]: 2026-01-22 15:16:53.101815318 +0000 UTC m=+0.090294162 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Jan 22 10:16:53 np0005592158 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:53 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:16:54 np0005592158 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:54.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:55.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:56 np0005592158 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:56 np0005592158 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:56.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:57.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:57 np0005592158 ceph-mon[81715]: Health check update: 116 slow ops, oldest one blocked for 6007 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:16:57 np0005592158 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 60 ])
Jan 22 10:16:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:16:58 np0005592158 ceph-mon[81715]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:16:58 np0005592158 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:58.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:16:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:59.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:59 np0005592158 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:17:00 np0005592158 ceph-osd[79044]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 22 10:17:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:17:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:00.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:17:00 np0005592158 ceph-mon[81715]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:17:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:01.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:01 np0005592158 podman[249769]: 2026-01-22 15:17:01.191813815 +0000 UTC m=+0.055299844 container exec 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:17:01 np0005592158 podman[249769]: 2026-01-22 15:17:01.287023168 +0000 UTC m=+0.150509167 container exec_died 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:17:02 np0005592158 ceph-mon[81715]: 68 slow requests (by type [ 'delayed' : 68 ] most affected pool [ 'vms' : 47 ])
Jan 22 10:17:02 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:17:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:02.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:03.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:17:03 np0005592158 ceph-mon[81715]: Health check update: 116 slow ops, oldest one blocked for 6012 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:17:03 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 32 ])
Jan 22 10:17:03 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:17:03 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:17:03 np0005592158 ceph-mon[81715]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 60 ])
Jan 22 10:17:03 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:17:03 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:17:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:04.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:05 np0005592158 ceph-mon[81715]: 131 slow requests (by type [ 'delayed' : 131 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:05.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:06.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:07.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:07 np0005592158 ceph-mon[81715]: 131 slow requests (by type [ 'delayed' : 131 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:08 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:08 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:08 np0005592158 ceph-mon[81715]: Health check update: 131 slow ops, oldest one blocked for 6018 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:17:08 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:17:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:08.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:09 np0005592158 podman[250021]: 2026-01-22 15:17:09.094768815 +0000 UTC m=+0.086128331 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:17:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:09.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:09 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:10.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:11.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:11 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:12 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:12 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:17:12 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:17:12 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:12.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:17:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:13.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:17:13 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:17:14 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 6023 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:17:14 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:17:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:14.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:17:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:17:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:15.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:17:15 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:16 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:16 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:16.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:17.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:18 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:17:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:17:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/243300702' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:17:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:17:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/243300702' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:17:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:17:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:18.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:17:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:17:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:19.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:17:19 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:20.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:21.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:21 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:21 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:22.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:23.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:23 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:23 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:23 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 6028 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:17:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:17:24 np0005592158 podman[250098]: 2026-01-22 15:17:24.057292564 +0000 UTC m=+0.049259641 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 10:17:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:24.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:24 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:24 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:25 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:17:25.022 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=59, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=58) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:17:25 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:17:25.023 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:17:25 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:17:25.023 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '59'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:17:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:17:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:25.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:17:26 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:26.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:27.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:28 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:28 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:28 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 6038 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:17:28 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:17:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:17:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:28.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:17:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:29.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:30.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:31.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:32.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:33.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:17:33 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:33 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:33 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:33 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:33 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:33 np0005592158 ceph-mon[81715]: Health check update: 12 slow ops, oldest one blocked for 6043 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:17:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:34.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:35.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:35 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:36.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:37.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:37 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:37 np0005592158 ceph-mon[81715]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:37 np0005592158 ceph-mon[81715]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:17:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:17:38 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:17:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:17:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:38.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:17:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:39.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:40 np0005592158 ceph-mon[81715]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:17:40 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:40 np0005592158 podman[250117]: 2026-01-22 15:17:40.135119868 +0000 UTC m=+0.125452564 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 22 10:17:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:40.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:41.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:42.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:43.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:43 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:43 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:43 np0005592158 ceph-mon[81715]: Health check update: 137 slow ops, oldest one blocked for 6053 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:17:43 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:17:44 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:44.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:45.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:45 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:45 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:46.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:17:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:47.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:17:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:17:47.517 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:17:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:17:47.517 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:17:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:17:47.517 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:17:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:17:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:48.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:49.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:49 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:49 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:49 np0005592158 ceph-mon[81715]: Health check update: 137 slow ops, oldest one blocked for 6058 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:17:49 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:50 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:50.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:17:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:51.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:17:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:17:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:52.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:17:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:17:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:53.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:17:53 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:53 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:17:54 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:54 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:54 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:54 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:54 np0005592158 ceph-mon[81715]: Health check update: 137 slow ops, oldest one blocked for 6063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:17:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:54.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:55 np0005592158 podman[250143]: 2026-01-22 15:17:55.057812738 +0000 UTC m=+0.052252751 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:17:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:55.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:55 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:56.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:57.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:57 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:58 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:17:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:58.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:17:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:17:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:59.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:17:59 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:59 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:00.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:01 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:01.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:02 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:02.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:18:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:03.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:18:03 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:03 np0005592158 ceph-mon[81715]: Health check update: 137 slow ops, oldest one blocked for 6073 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:18:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:18:04 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:04 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:04.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:05.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:06 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:06 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:18:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:06.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:18:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:07.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:07 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:07 np0005592158 ceph-mon[81715]: Health check update: 137 slow ops, oldest one blocked for 6078 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:18:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:18:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:08.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:08 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:09.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:09 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:10.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:10 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:11 np0005592158 podman[250162]: 2026-01-22 15:18:11.126044545 +0000 UTC m=+0.111770069 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 10:18:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:11.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:12 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:12.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:13 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:13 np0005592158 ceph-mon[81715]: Health check update: 137 slow ops, oldest one blocked for 6083 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:18:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:13 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:18:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:13.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:13 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:18:14 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:14 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:18:14 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:18:14 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:18:14 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:18:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:14.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:15.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:15 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:16 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:16 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:16.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:17.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:17 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:17 np0005592158 ceph-mon[81715]: Health check update: 137 slow ops, oldest one blocked for 6088 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:18:18 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:18:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:18.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:19.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:19 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:20 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:20 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:18:20 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:18:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:20.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:18:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:21.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:18:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:18:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:22.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:18:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:18:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:23.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:18:23 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:23 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:18:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:25.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:25 np0005592158 ceph-mon[81715]: Health check update: 137 slow ops, oldest one blocked for 6093 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:18:25 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:25 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:25.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:26 np0005592158 podman[250369]: 2026-01-22 15:18:26.050393091 +0000 UTC m=+0.045880232 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:18:26 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:27.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:27.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:27 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:28 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:18:28.013 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=60, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=59) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:18:28 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:18:28.014 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:18:28 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:28 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:28 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:18:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:18:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:29.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:18:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:29.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:29 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:30 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:31 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:18:31.017 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '60'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:18:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:31.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:31.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:32 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:33 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:33 np0005592158 ceph-mon[81715]: Health check update: 137 slow ops, oldest one blocked for 6103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:18:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:33.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:18:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:33.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:18:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:18:34 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:35.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:35 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:18:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:35.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:18:36 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:37.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:37 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:37.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:38 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:38 np0005592158 ceph-mon[81715]: Health check update: 137 slow ops, oldest one blocked for 6108 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:18:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:18:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:39.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:39.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:39 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:40 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:40 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:41.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:41.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:42 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:42 np0005592158 podman[250389]: 2026-01-22 15:18:42.135699196 +0000 UTC m=+0.127750978 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 22 10:18:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:18:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:43.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:18:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:18:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:43.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:18:43 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:43 np0005592158 ceph-mon[81715]: Health check update: 137 slow ops, oldest one blocked for 6113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:18:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:18:44 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:44 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:45.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:18:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:45.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:18:45 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:47.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:47.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:47 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:18:47.518 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:18:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:18:47.518 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:18:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:18:47.518 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:18:48 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #208. Immutable memtables: 0.
Jan 22 10:18:48 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:18:48.252269) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:18:48 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 133] Flushing memtable with next log file: 208
Jan 22 10:18:48 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095128252344, "job": 133, "event": "flush_started", "num_memtables": 1, "num_entries": 1885, "num_deletes": 459, "total_data_size": 3514065, "memory_usage": 3584952, "flush_reason": "Manual Compaction"}
Jan 22 10:18:48 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 133] Level-0 flush table #209: started
Jan 22 10:18:48 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:48 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095128271992, "cf_name": "default", "job": 133, "event": "table_file_creation", "file_number": 209, "file_size": 2286898, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 100534, "largest_seqno": 102414, "table_properties": {"data_size": 2279320, "index_size": 3943, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 23062, "raw_average_key_size": 22, "raw_value_size": 2261434, "raw_average_value_size": 2217, "num_data_blocks": 171, "num_entries": 1020, "num_filter_entries": 1020, "num_deletions": 459, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095006, "oldest_key_time": 1769095006, "file_creation_time": 1769095128, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 209, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:18:48 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 133] Flush lasted 19795 microseconds, and 8514 cpu microseconds.
Jan 22 10:18:48 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:18:48 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:18:48.272075) [db/flush_job.cc:967] [default] [JOB 133] Level-0 flush table #209: 2286898 bytes OK
Jan 22 10:18:48 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:18:48.272096) [db/memtable_list.cc:519] [default] Level-0 commit table #209 started
Jan 22 10:18:48 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:18:48.273409) [db/memtable_list.cc:722] [default] Level-0 commit table #209: memtable #1 done
Jan 22 10:18:48 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:18:48.273429) EVENT_LOG_v1 {"time_micros": 1769095128273422, "job": 133, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:18:48 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:18:48.273451) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:18:48 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 133] Try to delete WAL files size 3504371, prev total WAL file size 3507902, number of live WAL files 2.
Jan 22 10:18:48 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000205.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:18:48 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:18:48.274692) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034373833' seq:72057594037927935, type:22 .. '6C6F676D0035303335' seq:0, type:0; will stop at (end)
Jan 22 10:18:48 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 134] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:18:48 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 133 Base level 0, inputs: [209(2233KB)], [207(10157KB)]
Jan 22 10:18:48 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095128274726, "job": 134, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [209], "files_L6": [207], "score": -1, "input_data_size": 12688230, "oldest_snapshot_seqno": -1}
Jan 22 10:18:48 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 134] Generated table #210: 14171 keys, 12485492 bytes, temperature: kUnknown
Jan 22 10:18:48 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095128343532, "cf_name": "default", "job": 134, "event": "table_file_creation", "file_number": 210, "file_size": 12485492, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12407855, "index_size": 41108, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35461, "raw_key_size": 390186, "raw_average_key_size": 27, "raw_value_size": 12166289, "raw_average_value_size": 858, "num_data_blocks": 1492, "num_entries": 14171, "num_filter_entries": 14171, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769095128, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 210, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:18:48 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:18:48 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:18:48.343819) [db/compaction/compaction_job.cc:1663] [default] [JOB 134] Compacted 1@0 + 1@6 files to L6 => 12485492 bytes
Jan 22 10:18:48 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:18:48.345467) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 184.2 rd, 181.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 9.9 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(11.0) write-amplify(5.5) OK, records in: 15104, records dropped: 933 output_compression: NoCompression
Jan 22 10:18:48 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:18:48.345488) EVENT_LOG_v1 {"time_micros": 1769095128345478, "job": 134, "event": "compaction_finished", "compaction_time_micros": 68882, "compaction_time_cpu_micros": 38707, "output_level": 6, "num_output_files": 1, "total_output_size": 12485492, "num_input_records": 15104, "num_output_records": 14171, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:18:48 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000209.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:18:48 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095128346131, "job": 134, "event": "table_file_deletion", "file_number": 209}
Jan 22 10:18:48 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000207.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:18:48 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095128348341, "job": 134, "event": "table_file_deletion", "file_number": 207}
Jan 22 10:18:48 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:18:48.274578) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:18:48 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:18:48.348388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:18:48 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:18:48.348393) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:18:48 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:18:48.348395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:18:48 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:18:48.348397) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:18:48 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:18:48.348398) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:18:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:18:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:18:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:49.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:18:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:49.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:50 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:51.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:51 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:51 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:51.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:52 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:53.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:53.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:53 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:18:54 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:54 np0005592158 ceph-mon[81715]: Health check update: 137 slow ops, oldest one blocked for 6123 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:18:54 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:18:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:55.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:18:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:18:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:55.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:18:55 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:57 np0005592158 podman[250415]: 2026-01-22 15:18:57.070765819 +0000 UTC m=+0.055222753 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 22 10:18:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:57.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:57 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:57.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:58 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:58 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:58 np0005592158 ceph-mon[81715]: Health check update: 137 slow ops, oldest one blocked for 6128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:18:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:18:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:18:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:59.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:18:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:18:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:59.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:59 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:00 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:00 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:01.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:19:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:01.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:19:02 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:19:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:03.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:19:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:03.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:03 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:03 np0005592158 ceph-mon[81715]: Health check update: 137 slow ops, oldest one blocked for 6133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:19:04 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:04 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:05.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:05.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:05 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:06 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:19:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:07.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:19:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:07.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:19:09 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:09 np0005592158 ceph-mon[81715]: Health check update: 137 slow ops, oldest one blocked for 6138 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:09.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:09.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:10 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:10 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:11.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:11.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:11 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:11 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:12 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:13 np0005592158 podman[250434]: 2026-01-22 15:19:13.108673052 +0000 UTC m=+0.105322095 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 22 10:19:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:19:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:13.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:19:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:13.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:13 np0005592158 ceph-mon[81715]: Health check update: 137 slow ops, oldest one blocked for 6143 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:13 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:13 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:19:14 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:15.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:15.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:15 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:16 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:17.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:17.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:18 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:18 np0005592158 ceph-mon[81715]: Health check update: 137 slow ops, oldest one blocked for 6148 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:19:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:19.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:19 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:19:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:19.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:19:20 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:19:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:21.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:19:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:19:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:21.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:19:21 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:21 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:21 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:19:22 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:19:22 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:22 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:19:22 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:19:22 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:19:22 np0005592158 ceph-mon[81715]: Health check update: 137 slow ops, oldest one blocked for 6153 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:19:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:23.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:19:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:23.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:19:25 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:25.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:25.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:26 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:26 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:27.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:27.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:27 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:28 np0005592158 podman[250712]: 2026-01-22 15:19:28.068723755 +0000 UTC m=+0.052820438 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:19:28 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:19:29 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:29 np0005592158 ceph-mon[81715]: Health check update: 137 slow ops, oldest one blocked for 6158 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:29 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:29.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:29.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:30 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:19:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:31.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:19:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:31.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:31 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:31 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:19:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:19:33 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:33 np0005592158 ceph-mon[81715]: Health check update: 137 slow ops, oldest one blocked for 6163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:33.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:19:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:33.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:19:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:19:34 np0005592158 ceph-mon[81715]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:35.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:35 np0005592158 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:35.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:36 np0005592158 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:37.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:37 np0005592158 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:37.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:38 np0005592158 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:38 np0005592158 ceph-mon[81715]: Health check update: 25 slow ops, oldest one blocked for 6168 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:38 np0005592158 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:19:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:39.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:39.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:40 np0005592158 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:41.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:41.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:41 np0005592158 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:42 np0005592158 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:42 np0005592158 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:43.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:43.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:19:43 np0005592158 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:44 np0005592158 podman[250781]: 2026-01-22 15:19:44.108435117 +0000 UTC m=+0.091211657 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 10:19:44 np0005592158 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:45.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:19:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:45.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:19:45 np0005592158 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:47.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:47 np0005592158 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:47 np0005592158 ceph-mon[81715]: Health check update: 25 slow ops, oldest one blocked for 6178 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:47.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:19:47.518 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:19:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:19:47.519 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:19:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:19:47.519 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:19:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:19:48 np0005592158 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:19:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:49.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:19:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:49.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:49 np0005592158 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:49 np0005592158 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:51.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:51 np0005592158 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:19:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:51.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:19:52 np0005592158 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:19:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:53.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:19:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:53.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:53 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:19:53 np0005592158 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:53 np0005592158 ceph-mon[81715]: Health check update: 25 slow ops, oldest one blocked for 6183 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:53 np0005592158 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:55.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:55 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #211. Immutable memtables: 0.
Jan 22 10:19:55 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:19:55.223825) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:19:55 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 135] Flushing memtable with next log file: 211
Jan 22 10:19:55 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095195223858, "job": 135, "event": "flush_started", "num_memtables": 1, "num_entries": 1177, "num_deletes": 362, "total_data_size": 1969730, "memory_usage": 1995712, "flush_reason": "Manual Compaction"}
Jan 22 10:19:55 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 135] Level-0 flush table #212: started
Jan 22 10:19:55 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095195236366, "cf_name": "default", "job": 135, "event": "table_file_creation", "file_number": 212, "file_size": 1293702, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 102419, "largest_seqno": 103591, "table_properties": {"data_size": 1288682, "index_size": 2223, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 15155, "raw_average_key_size": 22, "raw_value_size": 1277050, "raw_average_value_size": 1864, "num_data_blocks": 95, "num_entries": 685, "num_filter_entries": 685, "num_deletions": 362, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095128, "oldest_key_time": 1769095128, "file_creation_time": 1769095195, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 212, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:19:55 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 135] Flush lasted 12635 microseconds, and 4234 cpu microseconds.
Jan 22 10:19:55 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:19:55 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:19:55.236450) [db/flush_job.cc:967] [default] [JOB 135] Level-0 flush table #212: 1293702 bytes OK
Jan 22 10:19:55 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:19:55.236483) [db/memtable_list.cc:519] [default] Level-0 commit table #212 started
Jan 22 10:19:55 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:19:55.237943) [db/memtable_list.cc:722] [default] Level-0 commit table #212: memtable #1 done
Jan 22 10:19:55 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:19:55.237971) EVENT_LOG_v1 {"time_micros": 1769095195237961, "job": 135, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:19:55 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:19:55.237996) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:19:55 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 135] Try to delete WAL files size 1963421, prev total WAL file size 1963421, number of live WAL files 2.
Jan 22 10:19:55 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000208.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:19:55 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:19:55.239271) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038373835' seq:72057594037927935, type:22 .. '7061786F730039303337' seq:0, type:0; will stop at (end)
Jan 22 10:19:55 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 136] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:19:55 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 135 Base level 0, inputs: [212(1263KB)], [210(11MB)]
Jan 22 10:19:55 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095195239329, "job": 136, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [212], "files_L6": [210], "score": -1, "input_data_size": 13779194, "oldest_snapshot_seqno": -1}
Jan 22 10:19:55 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 136] Generated table #213: 14117 keys, 12015866 bytes, temperature: kUnknown
Jan 22 10:19:55 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095195326754, "cf_name": "default", "job": 136, "event": "table_file_creation", "file_number": 213, "file_size": 12015866, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11938718, "index_size": 40747, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35333, "raw_key_size": 389333, "raw_average_key_size": 27, "raw_value_size": 11698197, "raw_average_value_size": 828, "num_data_blocks": 1475, "num_entries": 14117, "num_filter_entries": 14117, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769095195, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 213, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:19:55 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:19:55 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:19:55.327070) [db/compaction/compaction_job.cc:1663] [default] [JOB 136] Compacted 1@0 + 1@6 files to L6 => 12015866 bytes
Jan 22 10:19:55 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:19:55.328648) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 157.5 rd, 137.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 11.9 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(19.9) write-amplify(9.3) OK, records in: 14856, records dropped: 739 output_compression: NoCompression
Jan 22 10:19:55 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:19:55.328694) EVENT_LOG_v1 {"time_micros": 1769095195328683, "job": 136, "event": "compaction_finished", "compaction_time_micros": 87513, "compaction_time_cpu_micros": 57746, "output_level": 6, "num_output_files": 1, "total_output_size": 12015866, "num_input_records": 14856, "num_output_records": 14117, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:19:55 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000212.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:19:55 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095195329152, "job": 136, "event": "table_file_deletion", "file_number": 212}
Jan 22 10:19:55 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000210.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:19:55 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095195332292, "job": 136, "event": "table_file_deletion", "file_number": 210}
Jan 22 10:19:55 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:19:55.239168) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:19:55 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:19:55.332362) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:19:55 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:19:55.332367) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:19:55 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:19:55.332368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:19:55 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:19:55.332369) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:19:55 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:19:55.332371) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:19:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:55.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:55 np0005592158 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:56 np0005592158 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:56 np0005592158 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:57.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:57.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:58 np0005592158 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:19:59 np0005592158 podman[250808]: 2026-01-22 15:19:59.070800322 +0000 UTC m=+0.052898350 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:19:59 np0005592158 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:59 np0005592158 ceph-mon[81715]: Health check update: 25 slow ops, oldest one blocked for 6188 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:59.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:19:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:19:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:59.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:20:00 np0005592158 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:20:00 np0005592158 ceph-mon[81715]: Health detail: HEALTH_WARN 25 slow ops, oldest one blocked for 6188 sec, osd.2 has slow ops
Jan 22 10:20:00 np0005592158 ceph-mon[81715]: [WRN] SLOW_OPS: 25 slow ops, oldest one blocked for 6188 sec, osd.2 has slow ops
Jan 22 10:20:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:01.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:01.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:01 np0005592158 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:20:01 np0005592158 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:20:02 np0005592158 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:20:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:03.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:03.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:20:03 np0005592158 ceph-mon[81715]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:20:03 np0005592158 ceph-mon[81715]: Health check update: 25 slow ops, oldest one blocked for 6193 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:20:05 np0005592158 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:05.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:05.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:06 np0005592158 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:20:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:07.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:20:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:07.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:07 np0005592158 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:08 np0005592158 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:08 np0005592158 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:08 np0005592158 ceph-mon[81715]: Health check update: 72 slow ops, oldest one blocked for 6198 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:20:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:20:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:09.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:20:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:09.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:20:10 np0005592158 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:11.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:11.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:11 np0005592158 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:11 np0005592158 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:13.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:13.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:13 np0005592158 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:13 np0005592158 ceph-mon[81715]: Health check update: 72 slow ops, oldest one blocked for 6203 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:20:13 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:20:15 np0005592158 podman[250828]: 2026-01-22 15:20:15.08912143 +0000 UTC m=+0.075058814 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 10:20:15 np0005592158 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:15 np0005592158 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:20:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:15.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:20:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:15.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:16 np0005592158 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:20:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:17.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:20:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:17.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:17 np0005592158 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:17 np0005592158 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:20:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:19.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:19 np0005592158 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:19 np0005592158 ceph-mon[81715]: Health check update: 72 slow ops, oldest one blocked for 6208 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:20:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:19.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:21.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:20:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:21.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:20:21 np0005592158 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:21 np0005592158 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:23 np0005592158 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:23 np0005592158 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:23.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:23.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:20:24 np0005592158 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:25.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:20:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:25.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:20:25 np0005592158 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:26 np0005592158 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:26 np0005592158 ceph-mon[81715]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:27.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:27.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:27 np0005592158 ceph-mon[81715]: Health check update: 72 slow ops, oldest one blocked for 6218 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:20:27 np0005592158 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:28 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:20:29 np0005592158 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:20:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:29.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:20:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:29.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:30 np0005592158 podman[250854]: 2026-01-22 15:20:30.087141729 +0000 UTC m=+0.061055048 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 22 10:20:30 np0005592158 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:31.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:31.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:31 np0005592158 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:31 np0005592158 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:32 np0005592158 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:32 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:20:32 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:20:32 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 10:20:32 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 10:20:32 np0005592158 ceph-mon[81715]: Health check update: 139 slow ops, oldest one blocked for 6223 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:20:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:33.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:20:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:33.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:20:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:20:33 np0005592158 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:33 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:20:33 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:20:34 np0005592158 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:34 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:20:34 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:20:34 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:20:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:35.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:20:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:35.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:20:36 np0005592158 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:37 np0005592158 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:37.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:20:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:37.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:20:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:20:38 np0005592158 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:39.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:39.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:39 np0005592158 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:39 np0005592158 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:39 np0005592158 ceph-mon[81715]: Health check update: 139 slow ops, oldest one blocked for 6228 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:20:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:41.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:41.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:41 np0005592158 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:20:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:43.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:20:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:20:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:43.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:20:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:20:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:45.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:20:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:45.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:20:46 np0005592158 podman[251006]: 2026-01-22 15:20:46.091686618 +0000 UTC m=+0.081245189 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 22 10:20:46 np0005592158 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:46 np0005592158 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:47.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:47.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:20:47.519 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:20:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:20:47.519 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:20:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:20:47.519 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:20:47 np0005592158 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:47 np0005592158 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:47 np0005592158 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:47 np0005592158 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:47 np0005592158 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:47 np0005592158 ceph-mon[81715]: Health check update: 139 slow ops, oldest one blocked for 6233 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:20:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:20:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:49.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:49.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:49 np0005592158 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:51 np0005592158 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:51 np0005592158 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:20:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:51.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:20:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:51.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:52 np0005592158 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:52 np0005592158 ceph-mon[81715]: Health check update: 139 slow ops, oldest one blocked for 6238 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:20:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:53.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:53.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:53 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:20:54 np0005592158 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:54 np0005592158 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:55.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:55 np0005592158 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:55 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:20:55 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:20:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:20:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:55.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:20:56 np0005592158 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:56 np0005592158 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:57.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:57.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:58 np0005592158 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:58 np0005592158 ceph-mon[81715]: Health check update: 139 slow ops, oldest one blocked for 6243 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:20:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:20:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:59.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:20:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:20:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:59.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:20:59 np0005592158 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:20:59 np0005592158 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:01 np0005592158 podman[251082]: 2026-01-22 15:21:01.072302952 +0000 UTC m=+0.055499450 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 22 10:21:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:01.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:01 np0005592158 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:01.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:02 np0005592158 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:02 np0005592158 ceph-mon[81715]: Health check update: 139 slow ops, oldest one blocked for 6247 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:03.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:03.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:03 np0005592158 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:03 np0005592158 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:21:04 np0005592158 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:05.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:21:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:05.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:21:05 np0005592158 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:06 np0005592158 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:07.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:07.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:07 np0005592158 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:07 np0005592158 ceph-mon[81715]: Health check update: 76 slow ops, oldest one blocked for 6258 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:21:08 np0005592158 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:09.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:09.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:10 np0005592158 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:11 np0005592158 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:21:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:11.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:21:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:21:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:11.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:21:11 np0005592158 ceph-mgr[82073]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 10:21:12 np0005592158 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:21:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:13.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:21:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:13.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:13 np0005592158 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:13 np0005592158 ceph-mon[81715]: Health check update: 76 slow ops, oldest one blocked for 6263 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:13 np0005592158 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:13 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:21:14 np0005592158 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:15.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:15.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:16 np0005592158 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:17 np0005592158 podman[251102]: 2026-01-22 15:21:17.137600083 +0000 UTC m=+0.122051564 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251202)
Jan 22 10:21:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:17.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:17 np0005592158 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:17 np0005592158 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:17.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:18 np0005592158 ceph-mon[81715]: Health check update: 76 slow ops, oldest one blocked for 6268 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:18 np0005592158 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:21:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:19.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:19.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:19 np0005592158 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:20 np0005592158 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:21.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:21.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:21 np0005592158 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:22 np0005592158 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:22 np0005592158 ceph-mon[81715]: Health check update: 76 slow ops, oldest one blocked for 6273 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:23.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:23.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:23 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:21:24 np0005592158 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:25.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:25 np0005592158 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:25.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:27 np0005592158 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:27 np0005592158 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:21:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:27.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:21:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:27.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:28 np0005592158 ceph-mon[81715]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:28 np0005592158 ceph-mon[81715]: Health check update: 76 slow ops, oldest one blocked for 6278 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:28 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:21:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:29.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:29 np0005592158 ceph-mon[81715]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:29.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:30 np0005592158 ceph-mon[81715]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:30 np0005592158 ceph-mon[81715]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:31.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:31.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:32 np0005592158 ceph-mon[81715]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:32 np0005592158 podman[251130]: 2026-01-22 15:21:32.075133963 +0000 UTC m=+0.069925697 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:21:33 np0005592158 ceph-mon[81715]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:33 np0005592158 ceph-mon[81715]: Health check update: 140 slow ops, oldest one blocked for 6283 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:33.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:33.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:21:34 np0005592158 ceph-mon[81715]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:35 np0005592158 ceph-mon[81715]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:21:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:35.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:21:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:35.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:36 np0005592158 ceph-mon[81715]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:37 np0005592158 ceph-mon[81715]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:37.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:21:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:37.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:21:38 np0005592158 ceph-mon[81715]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:38 np0005592158 ceph-mon[81715]: Health check update: 140 slow ops, oldest one blocked for 6288 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:21:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:39.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:39.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:39 np0005592158 ceph-mon[81715]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:40 np0005592158 ceph-mon[81715]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:40 np0005592158 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:41.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:41.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:41 np0005592158 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:43 np0005592158 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:43 np0005592158 ceph-mon[81715]: Health check update: 63 slow ops, oldest one blocked for 6293 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:21:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:43.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:21:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:43.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:21:44 np0005592158 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:45 np0005592158 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:45.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:45.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:46 np0005592158 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:47.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:21:47.521 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:21:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:21:47.521 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:21:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:21:47.521 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:21:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:21:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:47.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:21:47 np0005592158 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:47 np0005592158 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:47 np0005592158 ceph-mon[81715]: Health check update: 63 slow ops, oldest one blocked for 6298 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:48 np0005592158 podman[251150]: 2026-01-22 15:21:48.134431825 +0000 UTC m=+0.130256919 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 22 10:21:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:21:48 np0005592158 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:21:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:49.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:21:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:49.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:49 np0005592158 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:50 np0005592158 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:51.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:21:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:51.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:21:52 np0005592158 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:53 np0005592158 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:53 np0005592158 ceph-mon[81715]: Health check update: 63 slow ops, oldest one blocked for 6303 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:53.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:53.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:53 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:21:54 np0005592158 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:55 np0005592158 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:55.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:21:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:55.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:21:55 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:21:55.603 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=61, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=60) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:21:55 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:21:55.604 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:21:56 np0005592158 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:56 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:21:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:57.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:57.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:57 np0005592158 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:57 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:21:57 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:21:57 np0005592158 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:58 np0005592158 ceph-mon[81715]: Health check update: 63 slow ops, oldest one blocked for 6308 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:58 np0005592158 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:58 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:21:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:59.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:21:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:59.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:00 np0005592158 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:22:00 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:22:00.606 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '61'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:22:01 np0005592158 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:22:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:01.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:01.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:02 np0005592158 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:22:03 np0005592158 podman[251307]: 2026-01-22 15:22:03.051589025 +0000 UTC m=+0.047270762 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Jan 22 10:22:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:03.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:03 np0005592158 ceph-mon[81715]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:22:03 np0005592158 ceph-mon[81715]: Health check update: 63 slow ops, oldest one blocked for 6313 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:22:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:03.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:22:03 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:22:04 np0005592158 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:04 np0005592158 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:05.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:05.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:06 np0005592158 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:07.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:07 np0005592158 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:07 np0005592158 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:07 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:22:07 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:22:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:07.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:08 np0005592158 ceph-mon[81715]: Health check update: 26 slow ops, oldest one blocked for 6318 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:08 np0005592158 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:08 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:22:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:09.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:22:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:09.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:22:09 np0005592158 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:11 np0005592158 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:11.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:11.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:12 np0005592158 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:13.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:22:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:13.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:22:14 np0005592158 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:14 np0005592158 ceph-mon[81715]: Health check update: 26 slow ops, oldest one blocked for 6323 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:14 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:22:14 np0005592158 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:14 np0005592158 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.003000080s ======
Jan 22 10:22:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:15.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Jan 22 10:22:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:15.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:16 np0005592158 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:22:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:17.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:22:17 np0005592158 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:17.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:22:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3481393543' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:22:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:22:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3481393543' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:22:18 np0005592158 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:18 np0005592158 ceph-mon[81715]: Health check update: 26 slow ops, oldest one blocked for 6328 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:18 np0005592158 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:19 np0005592158 podman[251378]: 2026-01-22 15:22:19.088092152 +0000 UTC m=+0.077813329 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Jan 22 10:22:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:19.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:19 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:22:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:22:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:19.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:22:19 np0005592158 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:21 np0005592158 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:21.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:21.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:22 np0005592158 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:23.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:23 np0005592158 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:23 np0005592158 ceph-mon[81715]: Health check update: 26 slow ops, oldest one blocked for 6333 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:22:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:23.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:22:23 np0005592158 ceph-osd[79044]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 22 10:22:24 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:22:24 np0005592158 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:24 np0005592158 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:25.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:25.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:25 np0005592158 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:27 np0005592158 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:27.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:27.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:28 np0005592158 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:28 np0005592158 ceph-mon[81715]: Health check update: 26 slow ops, oldest one blocked for 6338 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:29.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:29 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:22:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:29.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:29 np0005592158 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:29 np0005592158 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:31.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:31 np0005592158 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:31.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:32 np0005592158 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:32 np0005592158 ceph-mon[81715]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:33.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:33 np0005592158 ceph-mon[81715]: Health check update: 26 slow ops, oldest one blocked for 6343 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:33 np0005592158 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:22:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:33.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:22:34 np0005592158 podman[251405]: 2026-01-22 15:22:34.053655703 +0000 UTC m=+0.048577547 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 22 10:22:34 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:22:34 np0005592158 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:35.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:35.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:35 np0005592158 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:36 np0005592158 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:37.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:22:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:37.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:22:38 np0005592158 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:38 np0005592158 ceph-mon[81715]: Health check update: 27 slow ops, oldest one blocked for 6348 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:22:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:39.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:22:39 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:22:39 np0005592158 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:39 np0005592158 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:39.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:40 np0005592158 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:22:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:41.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:22:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:41.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:41 np0005592158 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:42 np0005592158 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:42 np0005592158 ceph-mon[81715]: Health check update: 27 slow ops, oldest one blocked for 6353 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:43.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:43.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:43 np0005592158 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:44 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:22:44 np0005592158 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:22:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:45.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:22:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:45.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:45 np0005592158 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:46 np0005592158 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:47.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:22:47.521 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:22:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:22:47.522 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:22:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:22:47.522 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:22:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:47.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:47 np0005592158 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:47 np0005592158 ceph-mon[81715]: Health check update: 27 slow ops, oldest one blocked for 6358 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:48 np0005592158 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:49.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:49 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:22:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:22:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:49.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:22:49 np0005592158 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:50 np0005592158 podman[251424]: 2026-01-22 15:22:50.087956453 +0000 UTC m=+0.073518443 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 22 10:22:51 np0005592158 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:51.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:22:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:51.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:22:52 np0005592158 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:53 np0005592158 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:53 np0005592158 ceph-mon[81715]: Health check update: 27 slow ops, oldest one blocked for 6363 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:53.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:53.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:54 np0005592158 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:54 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:22:55 np0005592158 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:55.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:55.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:56 np0005592158 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:57 np0005592158 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:57.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:57.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:58 np0005592158 ceph-mon[81715]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:58 np0005592158 ceph-mon[81715]: Health check update: 27 slow ops, oldest one blocked for 6368 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:22:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:59.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:22:59 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:22:59 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 31 ])
Jan 22 10:22:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:22:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:59.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:00 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #214. Immutable memtables: 0.
Jan 22 10:23:00 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:00.598544) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:23:00 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 137] Flushing memtable with next log file: 214
Jan 22 10:23:00 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095380598846, "job": 137, "event": "flush_started", "num_memtables": 1, "num_entries": 2802, "num_deletes": 569, "total_data_size": 5294219, "memory_usage": 5379024, "flush_reason": "Manual Compaction"}
Jan 22 10:23:00 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 137] Level-0 flush table #215: started
Jan 22 10:23:00 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 31 ])
Jan 22 10:23:00 np0005592158 ceph-mon[81715]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 31 ])
Jan 22 10:23:00 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095380622698, "cf_name": "default", "job": 137, "event": "table_file_creation", "file_number": 215, "file_size": 3454007, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 103596, "largest_seqno": 106393, "table_properties": {"data_size": 3443431, "index_size": 5853, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3653, "raw_key_size": 33366, "raw_average_key_size": 23, "raw_value_size": 3417977, "raw_average_value_size": 2380, "num_data_blocks": 250, "num_entries": 1436, "num_filter_entries": 1436, "num_deletions": 569, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095195, "oldest_key_time": 1769095195, "file_creation_time": 1769095380, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 215, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:23:00 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 137] Flush lasted 23959 microseconds, and 8128 cpu microseconds.
Jan 22 10:23:00 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:23:00 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:00.622749) [db/flush_job.cc:967] [default] [JOB 137] Level-0 flush table #215: 3454007 bytes OK
Jan 22 10:23:00 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:00.622770) [db/memtable_list.cc:519] [default] Level-0 commit table #215 started
Jan 22 10:23:00 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:00.625195) [db/memtable_list.cc:722] [default] Level-0 commit table #215: memtable #1 done
Jan 22 10:23:00 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:00.625212) EVENT_LOG_v1 {"time_micros": 1769095380625206, "job": 137, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:23:00 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:00.625230) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:23:00 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 137] Try to delete WAL files size 5280288, prev total WAL file size 5280288, number of live WAL files 2.
Jan 22 10:23:00 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000211.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:23:00 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:00.626613) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730039303336' seq:72057594037927935, type:22 .. '7061786F730039323838' seq:0, type:0; will stop at (end)
Jan 22 10:23:00 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 138] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:23:00 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 137 Base level 0, inputs: [215(3373KB)], [213(11MB)]
Jan 22 10:23:00 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095380626783, "job": 138, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [215], "files_L6": [213], "score": -1, "input_data_size": 15469873, "oldest_snapshot_seqno": -1}
Jan 22 10:23:00 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 138] Generated table #216: 14400 keys, 13609847 bytes, temperature: kUnknown
Jan 22 10:23:00 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095380738593, "cf_name": "default", "job": 138, "event": "table_file_creation", "file_number": 216, "file_size": 13609847, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13529146, "index_size": 43596, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36037, "raw_key_size": 394685, "raw_average_key_size": 27, "raw_value_size": 13282098, "raw_average_value_size": 922, "num_data_blocks": 1597, "num_entries": 14400, "num_filter_entries": 14400, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769095380, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 216, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:23:00 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:23:00 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:00.738915) [db/compaction/compaction_job.cc:1663] [default] [JOB 138] Compacted 1@0 + 1@6 files to L6 => 13609847 bytes
Jan 22 10:23:00 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:00.740315) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.2 rd, 121.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 11.5 +0.0 blob) out(13.0 +0.0 blob), read-write-amplify(8.4) write-amplify(3.9) OK, records in: 15553, records dropped: 1153 output_compression: NoCompression
Jan 22 10:23:00 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:00.740335) EVENT_LOG_v1 {"time_micros": 1769095380740326, "job": 138, "event": "compaction_finished", "compaction_time_micros": 111956, "compaction_time_cpu_micros": 64747, "output_level": 6, "num_output_files": 1, "total_output_size": 13609847, "num_input_records": 15553, "num_output_records": 14400, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:23:00 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000215.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:23:00 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095380741339, "job": 138, "event": "table_file_deletion", "file_number": 215}
Jan 22 10:23:00 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000213.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:23:00 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095380743711, "job": 138, "event": "table_file_deletion", "file_number": 213}
Jan 22 10:23:00 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:00.626461) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:00 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:00.743972) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:00 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:00.743978) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:00 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:00.743981) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:00 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:00.743984) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:00 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:00.743987) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:01.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:01 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:01.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:02 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:23:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:03.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:23:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:23:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:03.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:23:04 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:04 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:23:05 np0005592158 podman[251451]: 2026-01-22 15:23:05.05256525 +0000 UTC m=+0.047445487 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:23:05 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:23:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:05.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:23:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:23:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:05.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:23:06 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:07 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:07 np0005592158 ceph-mon[81715]: Health check update: 156 slow ops, oldest one blocked for 6378 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:07.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:23:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:07.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:23:08 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:08 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:23:08 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:23:08 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:23:09 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:09.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:09 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:23:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:09.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:10 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:11 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:11 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:11.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:11.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:12 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:13.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:13 np0005592158 ceph-mon[81715]: Health check update: 156 slow ops, oldest one blocked for 6383 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:13 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:13.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:14 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:23:14 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:15.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:15.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:16 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:16 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:23:16 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:23:17 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:17.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:17.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:17 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #217. Immutable memtables: 0.
Jan 22 10:23:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:17.730237) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:23:17 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 139] Flushing memtable with next log file: 217
Jan 22 10:23:17 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095397730286, "job": 139, "event": "flush_started", "num_memtables": 1, "num_entries": 512, "num_deletes": 278, "total_data_size": 528974, "memory_usage": 538224, "flush_reason": "Manual Compaction"}
Jan 22 10:23:17 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 139] Level-0 flush table #218: started
Jan 22 10:23:17 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095397734104, "cf_name": "default", "job": 139, "event": "table_file_creation", "file_number": 218, "file_size": 315797, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 106398, "largest_seqno": 106905, "table_properties": {"data_size": 313176, "index_size": 592, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 8080, "raw_average_key_size": 21, "raw_value_size": 307427, "raw_average_value_size": 819, "num_data_blocks": 25, "num_entries": 375, "num_filter_entries": 375, "num_deletions": 278, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095381, "oldest_key_time": 1769095381, "file_creation_time": 1769095397, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 218, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:23:17 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 139] Flush lasted 3886 microseconds, and 1850 cpu microseconds.
Jan 22 10:23:17 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:23:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:17.734133) [db/flush_job.cc:967] [default] [JOB 139] Level-0 flush table #218: 315797 bytes OK
Jan 22 10:23:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:17.734146) [db/memtable_list.cc:519] [default] Level-0 commit table #218 started
Jan 22 10:23:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:17.735178) [db/memtable_list.cc:722] [default] Level-0 commit table #218: memtable #1 done
Jan 22 10:23:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:17.735188) EVENT_LOG_v1 {"time_micros": 1769095397735184, "job": 139, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:23:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:17.735202) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:23:17 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 139] Try to delete WAL files size 525772, prev total WAL file size 525772, number of live WAL files 2.
Jan 22 10:23:17 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000214.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:23:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:17.735508) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033303037' seq:72057594037927935, type:22 .. '6D6772737461740033323539' seq:0, type:0; will stop at (end)
Jan 22 10:23:17 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 140] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:23:17 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 139 Base level 0, inputs: [218(308KB)], [216(12MB)]
Jan 22 10:23:17 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095397735536, "job": 140, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [218], "files_L6": [216], "score": -1, "input_data_size": 13925644, "oldest_snapshot_seqno": -1}
Jan 22 10:23:17 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 140] Generated table #219: 14209 keys, 10036727 bytes, temperature: kUnknown
Jan 22 10:23:17 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095397797432, "cf_name": "default", "job": 140, "event": "table_file_creation", "file_number": 219, "file_size": 10036727, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9961922, "index_size": 38148, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35589, "raw_key_size": 390795, "raw_average_key_size": 27, "raw_value_size": 9722924, "raw_average_value_size": 684, "num_data_blocks": 1369, "num_entries": 14209, "num_filter_entries": 14209, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769095397, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 219, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:23:17 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:23:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:17.797764) [db/compaction/compaction_job.cc:1663] [default] [JOB 140] Compacted 1@0 + 1@6 files to L6 => 10036727 bytes
Jan 22 10:23:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:17.799110) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 224.7 rd, 161.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 13.0 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(75.9) write-amplify(31.8) OK, records in: 14775, records dropped: 566 output_compression: NoCompression
Jan 22 10:23:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:17.799162) EVENT_LOG_v1 {"time_micros": 1769095397799121, "job": 140, "event": "compaction_finished", "compaction_time_micros": 61976, "compaction_time_cpu_micros": 26962, "output_level": 6, "num_output_files": 1, "total_output_size": 10036727, "num_input_records": 14775, "num_output_records": 14209, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:23:17 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000218.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:23:17 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095397799370, "job": 140, "event": "table_file_deletion", "file_number": 218}
Jan 22 10:23:17 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000216.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:23:17 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095397802155, "job": 140, "event": "table_file_deletion", "file_number": 216}
Jan 22 10:23:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:17.735466) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:17.802412) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:17.802420) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:17.802423) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:17.802426) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:17 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:17.802428) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:18 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:18 np0005592158 ceph-mon[81715]: Health check update: 156 slow ops, oldest one blocked for 6388 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:23:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2891920492' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:23:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:23:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2891920492' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:23:19 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:19.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:19 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:23:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:23:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:19.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:23:20 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:21 np0005592158 podman[251651]: 2026-01-22 15:23:21.143140104 +0000 UTC m=+0.118165113 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 10:23:21 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:21.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:21.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:22 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:23 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:23 np0005592158 ceph-mon[81715]: Health check update: 156 slow ops, oldest one blocked for 6393 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:23.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:23:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:23.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:23:24 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:24 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:23:25 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:25.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:23:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:25.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:23:26 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:27.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:27.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:27 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:27 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:27 np0005592158 ceph-mon[81715]: Health check update: 156 slow ops, oldest one blocked for 6398 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:29 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:29.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:29 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:23:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:23:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:29.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:23:30 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:31 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:31.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:23:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:31.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:23:32 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:32 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:23:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:33.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:23:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:33.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:33 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:33 np0005592158 ceph-mon[81715]: Health check update: 156 slow ops, oldest one blocked for 6403 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:34 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:23:35 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:35.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:35.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:36 np0005592158 podman[251679]: 2026-01-22 15:23:36.055630411 +0000 UTC m=+0.048265089 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 22 10:23:36 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:23:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:37.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:23:37 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:37 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:37.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:38 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:38 np0005592158 ceph-mon[81715]: Health check update: 156 slow ops, oldest one blocked for 6408 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:39.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:39 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:23:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:39.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:40 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:40 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:41.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:23:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:41.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:23:41 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:43.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:43.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:43 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:43 np0005592158 ceph-mon[81715]: Health check update: 156 slow ops, oldest one blocked for 6412 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:44 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:23:44 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:44 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:45.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:45.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:45 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:46 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:47.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:23:47.522 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:23:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:23:47.522 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:23:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:23:47.523 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:23:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:47.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:47 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:47 np0005592158 ceph-mon[81715]: Health check update: 156 slow ops, oldest one blocked for 6417 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:48 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:23:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:49.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:23:49 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:23:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:49.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:50 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:51 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:51.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:51.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:52 np0005592158 podman[251698]: 2026-01-22 15:23:52.085716923 +0000 UTC m=+0.077643405 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:23:52 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:53 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:53 np0005592158 ceph-mon[81715]: Health check update: 156 slow ops, oldest one blocked for 6422 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:53.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:53.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:54 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:54 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:23:55 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:55.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:55.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:56 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:57 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:57.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:57.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:57 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #220. Immutable memtables: 0.
Jan 22 10:23:57 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:57.776176) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:23:57 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 141] Flushing memtable with next log file: 220
Jan 22 10:23:57 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095437776269, "job": 141, "event": "flush_started", "num_memtables": 1, "num_entries": 812, "num_deletes": 325, "total_data_size": 1094797, "memory_usage": 1111488, "flush_reason": "Manual Compaction"}
Jan 22 10:23:57 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 141] Level-0 flush table #221: started
Jan 22 10:23:57 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095437782162, "cf_name": "default", "job": 141, "event": "table_file_creation", "file_number": 221, "file_size": 717997, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 106910, "largest_seqno": 107717, "table_properties": {"data_size": 714385, "index_size": 1199, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10658, "raw_average_key_size": 20, "raw_value_size": 706207, "raw_average_value_size": 1368, "num_data_blocks": 53, "num_entries": 516, "num_filter_entries": 516, "num_deletions": 325, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095398, "oldest_key_time": 1769095398, "file_creation_time": 1769095437, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 221, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:23:57 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 141] Flush lasted 5982 microseconds, and 2558 cpu microseconds.
Jan 22 10:23:57 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:23:57 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:57.782205) [db/flush_job.cc:967] [default] [JOB 141] Level-0 flush table #221: 717997 bytes OK
Jan 22 10:23:57 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:57.782223) [db/memtable_list.cc:519] [default] Level-0 commit table #221 started
Jan 22 10:23:57 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:57.783376) [db/memtable_list.cc:722] [default] Level-0 commit table #221: memtable #1 done
Jan 22 10:23:57 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:57.783390) EVENT_LOG_v1 {"time_micros": 1769095437783385, "job": 141, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:23:57 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:57.783407) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:23:57 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 141] Try to delete WAL files size 1090180, prev total WAL file size 1090180, number of live WAL files 2.
Jan 22 10:23:57 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000217.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:23:57 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:57.783925) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0035303334' seq:72057594037927935, type:22 .. '6C6F676D0035323837' seq:0, type:0; will stop at (end)
Jan 22 10:23:57 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 142] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:23:57 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 141 Base level 0, inputs: [221(701KB)], [219(9801KB)]
Jan 22 10:23:57 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095437783973, "job": 142, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [221], "files_L6": [219], "score": -1, "input_data_size": 10754724, "oldest_snapshot_seqno": -1}
Jan 22 10:23:57 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 142] Generated table #222: 14066 keys, 10584317 bytes, temperature: kUnknown
Jan 22 10:23:57 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095437842857, "cf_name": "default", "job": 142, "event": "table_file_creation", "file_number": 222, "file_size": 10584317, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10509560, "index_size": 38484, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35205, "raw_key_size": 388480, "raw_average_key_size": 27, "raw_value_size": 10272136, "raw_average_value_size": 730, "num_data_blocks": 1380, "num_entries": 14066, "num_filter_entries": 14066, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769095437, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 222, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:23:57 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:23:57 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:57.843163) [db/compaction/compaction_job.cc:1663] [default] [JOB 142] Compacted 1@0 + 1@6 files to L6 => 10584317 bytes
Jan 22 10:23:57 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:57.845085) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.3 rd, 179.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.6 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(29.7) write-amplify(14.7) OK, records in: 14725, records dropped: 659 output_compression: NoCompression
Jan 22 10:23:57 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:57.845104) EVENT_LOG_v1 {"time_micros": 1769095437845094, "job": 142, "event": "compaction_finished", "compaction_time_micros": 58979, "compaction_time_cpu_micros": 25424, "output_level": 6, "num_output_files": 1, "total_output_size": 10584317, "num_input_records": 14725, "num_output_records": 14066, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:23:57 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000221.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:23:57 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095437845340, "job": 142, "event": "table_file_deletion", "file_number": 221}
Jan 22 10:23:57 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000219.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:23:57 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095437846993, "job": 142, "event": "table_file_deletion", "file_number": 219}
Jan 22 10:23:57 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:57.783845) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:57 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:57.847026) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:57 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:57.847030) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:57 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:57.847032) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:57 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:57.847034) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:57 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:23:57.847036) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:58 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:58 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:58 np0005592158 ceph-mon[81715]: Health check update: 156 slow ops, oldest one blocked for 6427 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:23:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:59.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:23:59 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:23:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:23:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:23:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:59.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:23:59 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:01 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:01.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:01.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:02 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:02 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:24:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:03.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:24:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:24:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:03.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:24:04 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:04 np0005592158 ceph-mon[81715]: Health check update: 156 slow ops, oldest one blocked for 6433 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:04 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:24:05 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:05.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:05.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:06 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:07 np0005592158 podman[251724]: 2026-01-22 15:24:07.060640075 +0000 UTC m=+0.048862453 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 22 10:24:07 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:07.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:24:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:07.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:24:08 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:08 np0005592158 ceph-mon[81715]: Health check update: 156 slow ops, oldest one blocked for 6437 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:09 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:24:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:09.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:24:09 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:24:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:09.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:10 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:11 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:24:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:11.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:24:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:11.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:12 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:13.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:24:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:13.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:24:13 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:13 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:13 np0005592158 ceph-mon[81715]: Health check update: 156 slow ops, oldest one blocked for 6442 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:14 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:24:15 np0005592158 ceph-mon[81715]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:15.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:15.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:16 np0005592158 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:17 np0005592158 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:17 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 10:24:17 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:24:17 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:24:17 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:24:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:17.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:24:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:17.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:24:18 np0005592158 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:18 np0005592158 ceph-mon[81715]: Health check update: 71 slow ops, oldest one blocked for 6448 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:24:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1614632194' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:24:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:24:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1614632194' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:24:19 np0005592158 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:19 np0005592158 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:19.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:19 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:24:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:24:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:19.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:24:20 np0005592158 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:21 np0005592158 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:24:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:21.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:24:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:21.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:22 np0005592158 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:23 np0005592158 podman[251874]: 2026-01-22 15:24:23.116247249 +0000 UTC m=+0.105994459 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 22 10:24:23 np0005592158 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:23 np0005592158 ceph-mon[81715]: Health check update: 71 slow ops, oldest one blocked for 6453 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:23 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:24:23 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:24:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:23.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:23.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:24 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:24:24 np0005592158 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:25.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:25 np0005592158 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:25.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:26 np0005592158 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:24:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:27.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:24:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:27.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:27 np0005592158 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:27 np0005592158 ceph-mon[81715]: Health check update: 71 slow ops, oldest one blocked for 6458 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:29 np0005592158 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:29.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:29 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:24:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:29.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:30 np0005592158 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:31 np0005592158 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:31.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:31.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:32 np0005592158 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:33.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:33.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:33 np0005592158 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:33 np0005592158 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:34 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:24:34 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 10:24:34 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.5 total, 600.0 interval#012Cumulative writes: 16K writes, 48K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 16K writes, 5677 syncs, 2.87 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 984 writes, 1511 keys, 984 commit groups, 1.0 writes per commit group, ingest: 0.56 MB, 0.00 MB/s#012Interval WAL: 984 writes, 466 syncs, 2.11 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 10:24:35 np0005592158 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:24:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:35.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:24:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:35.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:36 np0005592158 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:37.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:37 np0005592158 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:37 np0005592158 ceph-mon[81715]: Health check update: 71 slow ops, oldest one blocked for 6467 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:37.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:38 np0005592158 podman[251951]: 2026-01-22 15:24:38.058517199 +0000 UTC m=+0.046509190 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:24:38 np0005592158 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:38 np0005592158 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:39.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:39 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:24:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:39.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:39 np0005592158 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:41 np0005592158 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:41.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:41.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:42 np0005592158 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:43 np0005592158 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:43 np0005592158 ceph-mon[81715]: Health check update: 71 slow ops, oldest one blocked for 6473 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:43.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:43.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:44 np0005592158 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:44 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:24:45 np0005592158 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:24:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:45.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:24:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:45.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:46 np0005592158 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:47 np0005592158 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:47 np0005592158 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:24:47.524 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:24:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:24:47.524 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:24:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:24:47.525 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:24:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:24:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:47.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:24:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:47.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:48 np0005592158 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:48 np0005592158 ceph-mon[81715]: Health check update: 74 slow ops, oldest one blocked for 6478 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:49 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:24:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:49.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:49 np0005592158 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:24:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:49.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:24:50 np0005592158 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:24:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:51.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:24:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:51.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:51 np0005592158 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:53 np0005592158 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:53 np0005592158 ceph-mon[81715]: Health check update: 74 slow ops, oldest one blocked for 6483 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:53.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:24:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:53.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:24:54 np0005592158 podman[251971]: 2026-01-22 15:24:54.141608355 +0000 UTC m=+0.130038258 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 22 10:24:54 np0005592158 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:54 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:24:55 np0005592158 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:55 np0005592158 ceph-mon[81715]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:55.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:55.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:56 np0005592158 ceph-mon[81715]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:24:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:24:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:57.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:24:57 np0005592158 ceph-mon[81715]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:24:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:57.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:58 np0005592158 ceph-mon[81715]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:24:58 np0005592158 ceph-mon[81715]: Health check update: 74 slow ops, oldest one blocked for 6488 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:59 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:24:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:59.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:59 np0005592158 ceph-mon[81715]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:24:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:24:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:59.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:00 np0005592158 ceph-mon[81715]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:01.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:01.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:01 np0005592158 ceph-mon[81715]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:03 np0005592158 ceph-mon[81715]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:03 np0005592158 ceph-mon[81715]: Health check update: 108 slow ops, oldest one blocked for 6493 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:03.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:03.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:04 np0005592158 ceph-mon[81715]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:04 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:25:05 np0005592158 ceph-mon[81715]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:05.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:05.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:06 np0005592158 ceph-mon[81715]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:07 np0005592158 ceph-mon[81715]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:07 np0005592158 ceph-mon[81715]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:25:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:07.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:25:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:07.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:08 np0005592158 ceph-mon[81715]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:08 np0005592158 ceph-mon[81715]: Health check update: 108 slow ops, oldest one blocked for 6498 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:09 np0005592158 podman[251998]: 2026-01-22 15:25:09.054390428 +0000 UTC m=+0.047027363 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:25:09 np0005592158 ceph-mon[81715]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:09 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:25:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:09.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:09.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:10 np0005592158 ceph-mon[81715]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:11.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:25:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:11.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:25:12 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:13 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:13 np0005592158 ceph-mon[81715]: Health check update: 108 slow ops, oldest one blocked for 6503 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:13.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:25:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:13.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:25:14 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:14 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:25:15 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:15 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:15.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:15.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:16 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:17 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:25:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:17.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:25:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:17.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:18 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:18 np0005592158 ceph-mon[81715]: Health check update: 164 slow ops, oldest one blocked for 6508 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:19 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:25:19 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:25:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:19.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:25:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:25:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:19.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:25:20 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:21.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:21.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:22 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:23 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:23 np0005592158 ceph-mon[81715]: Health check update: 164 slow ops, oldest one blocked for 6513 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:23.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:23.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:24 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:24 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:25:24 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:25:24 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:25:24 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:25:25 np0005592158 podman[252149]: 2026-01-22 15:25:25.081418008 +0000 UTC m=+0.075200565 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Jan 22 10:25:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:25.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:25 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:25 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:25.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:26 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:27.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:27 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:25:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:27.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:25:28 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:28 np0005592158 ceph-mon[81715]: Health check update: 164 slow ops, oldest one blocked for 6518 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:29 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:25:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:29.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:29 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:29.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:31 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:25:31 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:25:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:25:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:31.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:25:31 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 10:25:31 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.0 total, 600.0 interval#012Cumulative writes: 20K writes, 109K keys, 20K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s#012Cumulative WAL: 20K writes, 20K syncs, 1.00 writes per sync, written: 0.18 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1696 writes, 9789 keys, 1696 commit groups, 1.0 writes per commit group, ingest: 16.34 MB, 0.03 MB/s#012Interval WAL: 1696 writes, 1696 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     76.8      1.52              0.38        71    0.021       0      0       0.0       0.0#012  L6      1/0   10.09 MB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   5.9    147.5    127.6      5.35              2.11        70    0.076    743K    40K       0.0       0.0#012 Sum      1/0   10.09 MB   0.0      0.8     0.1      0.7       0.8      0.1       0.0   6.9    114.9    116.4      6.87              2.49       141    0.049    743K    40K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.8    144.6    145.4      0.52              0.27        12    0.043     90K   4955       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   0.0    147.5    127.6      5.35              2.11        70    0.076    743K    40K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     76.9      1.51              0.38        70    0.022       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6600.0 total, 600.0 interval#012Flush(GB): cumulative 0.114, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.78 GB write, 0.12 MB/s write, 0.77 GB read, 0.12 MB/s read, 6.9 seconds#012Interval compaction: 0.07 GB write, 0.13 MB/s write, 0.07 GB read, 0.13 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f7686a91f0#2 capacity: 304.00 MB usage: 83.47 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.000534 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(4362,78.95 MB,25.9711%) FilterBlock(141,2.02 MB,0.663491%) IndexBlock(141,2.50 MB,0.823397%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 22 10:25:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:31.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:32 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:33 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:33 np0005592158 ceph-mon[81715]: Health check update: 164 slow ops, oldest one blocked for 6523 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:33.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:33.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:34 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:34 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:25:35 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:35.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:35.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:36 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:37 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:37.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:37.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:38 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:38 np0005592158 ceph-mon[81715]: Health check update: 164 slow ops, oldest one blocked for 6528 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:39 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:39 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:25:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:39.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:39.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:40 np0005592158 podman[252225]: 2026-01-22 15:25:40.049925338 +0000 UTC m=+0.046080198 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 10:25:40 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:41.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:41.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:42 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:42 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:43 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:43 np0005592158 ceph-mon[81715]: Health check update: 164 slow ops, oldest one blocked for 6533 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:25:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:43.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:25:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:43.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:44 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:44 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:25:45 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:45.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:45.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:47 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:25:47.525 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:25:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:25:47.525 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:25:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:25:47.525 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:25:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:25:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:47.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:25:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:47.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:48 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:48 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:48 np0005592158 ceph-mon[81715]: Health check update: 164 slow ops, oldest one blocked for 6538 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:49 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:49 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:25:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:49.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:49.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:50 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:50 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:51 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:51.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:25:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:51.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:25:52 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:53.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:53 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:53 np0005592158 ceph-mon[81715]: Health check update: 164 slow ops, oldest one blocked for 6543 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:53.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:54 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #223. Immutable memtables: 0.
Jan 22 10:25:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:25:54.063910) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:25:54 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 143] Flushing memtable with next log file: 223
Jan 22 10:25:54 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095554063937, "job": 143, "event": "flush_started", "num_memtables": 1, "num_entries": 1923, "num_deletes": 449, "total_data_size": 3295031, "memory_usage": 3359200, "flush_reason": "Manual Compaction"}
Jan 22 10:25:54 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 143] Level-0 flush table #224: started
Jan 22 10:25:54 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095554078163, "cf_name": "default", "job": 143, "event": "table_file_creation", "file_number": 224, "file_size": 2150401, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 107722, "largest_seqno": 109640, "table_properties": {"data_size": 2143190, "index_size": 3576, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 23099, "raw_average_key_size": 22, "raw_value_size": 2125694, "raw_average_value_size": 2088, "num_data_blocks": 155, "num_entries": 1018, "num_filter_entries": 1018, "num_deletions": 449, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095438, "oldest_key_time": 1769095438, "file_creation_time": 1769095554, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 224, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:25:54 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 143] Flush lasted 14306 microseconds, and 4747 cpu microseconds.
Jan 22 10:25:54 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:25:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:25:54.078214) [db/flush_job.cc:967] [default] [JOB 143] Level-0 flush table #224: 2150401 bytes OK
Jan 22 10:25:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:25:54.078230) [db/memtable_list.cc:519] [default] Level-0 commit table #224 started
Jan 22 10:25:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:25:54.079736) [db/memtable_list.cc:722] [default] Level-0 commit table #224: memtable #1 done
Jan 22 10:25:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:25:54.079747) EVENT_LOG_v1 {"time_micros": 1769095554079744, "job": 143, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:25:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:25:54.079763) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:25:54 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 143] Try to delete WAL files size 3285297, prev total WAL file size 3285297, number of live WAL files 2.
Jan 22 10:25:54 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000220.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:25:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:25:54.080444) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730039323837' seq:72057594037927935, type:22 .. '7061786F730039353339' seq:0, type:0; will stop at (end)
Jan 22 10:25:54 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 144] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:25:54 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 143 Base level 0, inputs: [224(2100KB)], [222(10MB)]
Jan 22 10:25:54 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095554080468, "job": 144, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [224], "files_L6": [222], "score": -1, "input_data_size": 12734718, "oldest_snapshot_seqno": -1}
Jan 22 10:25:54 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 144] Generated table #225: 14173 keys, 10835853 bytes, temperature: kUnknown
Jan 22 10:25:54 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095554155576, "cf_name": "default", "job": 144, "event": "table_file_creation", "file_number": 225, "file_size": 10835853, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10760225, "index_size": 39099, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35461, "raw_key_size": 390633, "raw_average_key_size": 27, "raw_value_size": 10520785, "raw_average_value_size": 742, "num_data_blocks": 1404, "num_entries": 14173, "num_filter_entries": 14173, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769095554, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 225, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:25:54 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:25:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:25:54.156132) [db/compaction/compaction_job.cc:1663] [default] [JOB 144] Compacted 1@0 + 1@6 files to L6 => 10835853 bytes
Jan 22 10:25:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:25:54.157773) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.0 rd, 143.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 10.1 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(11.0) write-amplify(5.0) OK, records in: 15084, records dropped: 911 output_compression: NoCompression
Jan 22 10:25:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:25:54.157793) EVENT_LOG_v1 {"time_micros": 1769095554157783, "job": 144, "event": "compaction_finished", "compaction_time_micros": 75352, "compaction_time_cpu_micros": 27297, "output_level": 6, "num_output_files": 1, "total_output_size": 10835853, "num_input_records": 15084, "num_output_records": 14173, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:25:54 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000224.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:25:54 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095554158901, "job": 144, "event": "table_file_deletion", "file_number": 224}
Jan 22 10:25:54 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000222.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:25:54 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095554161268, "job": 144, "event": "table_file_deletion", "file_number": 222}
Jan 22 10:25:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:25:54.080409) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:25:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:25:54.161384) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:25:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:25:54.161388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:25:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:25:54.161389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:25:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:25:54.161391) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:25:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:25:54.161392) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:25:54 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:25:55 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:55.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:55.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:56 np0005592158 podman[252245]: 2026-01-22 15:25:56.09145759 +0000 UTC m=+0.082214745 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 10:25:56 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:57 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:57.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:57.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:58 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:58 np0005592158 ceph-mon[81715]: Health check update: 164 slow ops, oldest one blocked for 6548 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:59 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:59 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:25:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:59.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:25:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:59.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:00 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:26:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:01.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:26:01 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:01 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:01.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:03 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:03.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:03.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:04 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:04 np0005592158 ceph-mon[81715]: Health check update: 164 slow ops, oldest one blocked for 6553 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:04 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:26:05 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:05.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:05.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:06 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:07 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:26:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:07.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:26:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:26:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:07.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:26:08 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:08 np0005592158 ceph-mon[81715]: Health check update: 164 slow ops, oldest one blocked for 6558 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:09 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:09 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:09 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:26:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:26:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:09.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:26:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:09.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:10 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:11 np0005592158 podman[252271]: 2026-01-22 15:26:11.0688847 +0000 UTC m=+0.056943702 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 22 10:26:11 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:11.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:11.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:12 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:13 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:13 np0005592158 ceph-mon[81715]: Health check update: 164 slow ops, oldest one blocked for 6563 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:13.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:26:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:13.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:26:14 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:26:14 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:15 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:15.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:15.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:16 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:17 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:17.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:17.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:26:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2075344860' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:26:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:26:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2075344860' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:26:18 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:18 np0005592158 ceph-mon[81715]: Health check update: 164 slow ops, oldest one blocked for 6568 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:19 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:26:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:19.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:19.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:20 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:21 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:21.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:21.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:23 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:26:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:23.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:26:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:23.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:24 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:24 np0005592158 ceph-mon[81715]: Health check update: 164 slow ops, oldest one blocked for 6573 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:24 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:24 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:26:25 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:25.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:26:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:25.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:26:26 np0005592158 podman[252292]: 2026-01-22 15:26:26.37573038 +0000 UTC m=+0.172599020 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:26:26 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:27 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:27.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:26:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:27.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:26:28 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:28 np0005592158 ceph-mon[81715]: Health check update: 164 slow ops, oldest one blocked for 6578 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:28 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:29 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:26:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:29.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:29.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:30 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:31.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:31.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:31 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:33 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:33 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:26:33 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:26:33 np0005592158 ceph-mon[81715]: Health check update: 164 slow ops, oldest one blocked for 6583 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:33 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:26:33 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:26:33 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:26:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:33.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:33.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:34 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:34 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:26:35 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:35.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:26:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:35.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:26:36 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:37 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:37.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:37.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:38 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:38 np0005592158 ceph-mon[81715]: Health check update: 164 slow ops, oldest one blocked for 6588 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:39 np0005592158 ceph-mon[81715]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:39 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:26:39 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:26:39 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:26:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:39.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:26:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:39.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:26:40 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:40 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:41 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:41 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:26:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:41.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:26:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:26:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:41.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:26:42 np0005592158 podman[252501]: 2026-01-22 15:26:42.111375488 +0000 UTC m=+0.087177449 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:26:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:43.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:26:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:43.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:26:43 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:43 np0005592158 ceph-mon[81715]: Health check update: 42 slow ops, oldest one blocked for 6593 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:44 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:26:44 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:45.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:45.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:45 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:46 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:26:47.526 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:26:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:26:47.526 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:26:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:26:47.526 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:26:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:47.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:26:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:47.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:26:47 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:47 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:47 np0005592158 ceph-mon[81715]: Health check update: 42 slow ops, oldest one blocked for 6598 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:49 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:49 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:26:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:49.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:49.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:50 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:26:51 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:26:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:51.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:51.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:52 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:26:53 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:26:53 np0005592158 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 6603 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:53.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:26:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:53.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:26:54 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:26:54 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:26:55 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:26:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:26:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:55.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:26:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:55.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:56 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:26:57 np0005592158 podman[252518]: 2026-01-22 15:26:57.097688968 +0000 UTC m=+0.090638082 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 10:26:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:57.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:57.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:58 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:26:58 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:26:59 np0005592158 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 6608 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:59 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:26:59 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:26:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:26:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:59.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:26:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:26:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:59.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:00 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:01.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:01.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:02 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:03 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:03 np0005592158 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 6613 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:03.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:27:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:03.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:27:04 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:04 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:27:05 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:05.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:27:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:05.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:27:06 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:07 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:07.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:07.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:08 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:08 np0005592158 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 6618 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:09 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:09 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:27:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:09.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:09.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:10 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:11 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:11.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:11.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:12 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:13 np0005592158 podman[252544]: 2026-01-22 15:27:13.063455981 +0000 UTC m=+0.053191059 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 10:27:13 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:13 np0005592158 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 6623 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:13.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:13.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:14 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:14 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:27:15 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:15.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:15.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:16 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:17.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:17 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:17 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:27:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:17.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:27:18 np0005592158 ceph-mon[81715]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:18 np0005592158 ceph-mon[81715]: Health check update: 38 slow ops, oldest one blocked for 6628 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:19 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:27:19 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3894192697' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:27:19 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:27:19 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3894192697' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:27:19 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:27:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:27:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:19.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:27:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:19.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:20 np0005592158 ceph-mon[81715]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 10:27:21 np0005592158 ceph-mon[81715]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 10:27:21 np0005592158 ceph-mon[81715]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 10:27:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:21.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:21.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:22 np0005592158 ceph-mon[81715]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 10:27:23 np0005592158 ceph-mon[81715]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 10:27:23 np0005592158 ceph-mon[81715]: Health check update: 80 slow ops, oldest one blocked for 6633 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:23.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:27:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:23.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:27:24 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:27:24 np0005592158 ceph-mon[81715]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 10:27:25 np0005592158 ceph-mon[81715]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 10:27:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:25.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:25.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:27.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:27.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:28 np0005592158 podman[252563]: 2026-01-22 15:27:28.104875431 +0000 UTC m=+0.091419124 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:27:28 np0005592158 ceph-mon[81715]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 10:27:28 np0005592158 ceph-mon[81715]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 10:27:28 np0005592158 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:28 np0005592158 ceph-mon[81715]: Health check update: 80 slow ops, oldest one blocked for 6638 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:29 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:27:29 np0005592158 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:29.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:27:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:29.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:27:30 np0005592158 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:31.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:31.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:32 np0005592158 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:33 np0005592158 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:33 np0005592158 ceph-mon[81715]: Health check update: 78 slow ops, oldest one blocked for 6643 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:33.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:27:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:33.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:27:34 np0005592158 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:34 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:27:35 np0005592158 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:35.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:35.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:36 np0005592158 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:37 np0005592158 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:37.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:37.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:38 np0005592158 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:38 np0005592158 ceph-mon[81715]: Health check update: 78 slow ops, oldest one blocked for 6648 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:39 np0005592158 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:39 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:27:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:27:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:39.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:27:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:39.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:40 np0005592158 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:40 np0005592158 podman[252761]: 2026-01-22 15:27:40.430610485 +0000 UTC m=+0.058971696 container exec 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:27:40 np0005592158 podman[252761]: 2026-01-22 15:27:40.587157799 +0000 UTC m=+0.215518960 container exec_died 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:27:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:27:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:41.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:27:41 np0005592158 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:41.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:42 np0005592158 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:42 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:27:42 np0005592158 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:42 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:27:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:43.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:43 np0005592158 ceph-mon[81715]: Health check update: 78 slow ops, oldest one blocked for 6653 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:43 np0005592158 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:43 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:27:43 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:27:43 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:27:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:43.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:44 np0005592158 podman[253014]: 2026-01-22 15:27:44.071527939 +0000 UTC m=+0.051625527 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 22 10:27:44 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:27:44 np0005592158 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:45.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:45 np0005592158 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:27:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:45.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:27:47 np0005592158 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:27:47.527 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:27:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:27:47.528 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:27:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:27:47.528 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:27:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:27:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:47.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:27:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:48.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:48 np0005592158 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:48 np0005592158 ceph-mon[81715]: Health check update: 78 slow ops, oldest one blocked for 6658 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:49 np0005592158 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:49 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:27:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:49.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:27:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:50.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:27:50 np0005592158 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:50 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:27:50 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:27:51 np0005592158 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:27:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:51.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:27:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:52.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:52 np0005592158 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:53 np0005592158 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:53 np0005592158 ceph-mon[81715]: Health check update: 78 slow ops, oldest one blocked for 6663 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:53.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:27:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:54.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:27:54 np0005592158 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:54 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:27:55 np0005592158 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:55.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:56.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:56 np0005592158 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:57 np0005592158 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:57.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:58.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:58 np0005592158 ceph-mon[81715]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:58 np0005592158 ceph-mon[81715]: Health check update: 78 slow ops, oldest one blocked for 6668 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:59 np0005592158 podman[253081]: 2026-01-22 15:27:59.126800405 +0000 UTC m=+0.107455069 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 22 10:27:59 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:27:59 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:27:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:27:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:27:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:59.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:28:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:28:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:00.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:28:00 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:01 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:01.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:02.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:02 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:03 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:03 np0005592158 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6673 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:03.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:04.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:04 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:04 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:28:05 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:05 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:28:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:05.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:28:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:06.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:06 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:07.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:08.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:08 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:09 np0005592158 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6678 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:09 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:09 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:28:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:09.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:10.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:10 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:11 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:11.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:12.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:12 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:13 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:13 np0005592158 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6683 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:28:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:13.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:28:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:28:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:14.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:28:14 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:28:14 np0005592158 podman[253107]: 2026-01-22 15:28:14.893470202 +0000 UTC m=+0.076617744 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 10:28:15 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:15 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:15.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:16.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:16 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:16 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:17 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:28:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:17.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:28:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:18.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:28:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4199670489' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:28:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:28:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4199670489' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:28:18 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:18 np0005592158 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6688 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:19 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:28:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:19.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:20.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:21 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:21 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:21.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:22.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:22 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:23 np0005592158 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6693 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:23.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:24.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:24 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:24 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:24 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:28:25 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:28:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:25.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:28:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:26.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:27 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:27 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:28:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:27.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:28:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:28.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:28 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:28 np0005592158 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6698 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:29 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:28:29 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:29 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:28:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:29.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:28:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:30.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:30 np0005592158 podman[253126]: 2026-01-22 15:28:30.107572215 +0000 UTC m=+0.093671916 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 10:28:30 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:31 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:31.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:32.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:32 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:33.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:28:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:34.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:28:34 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:34 np0005592158 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6703 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:34 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:28:35 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:35.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:28:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:36.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:28:36 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:36 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:37.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:38.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:38 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:38 np0005592158 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6708 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:39 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:28:39 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:39.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:40.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:40 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:40 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:41.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:42.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:42 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:43 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #226. Immutable memtables: 0.
Jan 22 10:28:43 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:43.498586) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:28:43 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 145] Flushing memtable with next log file: 226
Jan 22 10:28:43 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095723498733, "job": 145, "event": "flush_started", "num_memtables": 1, "num_entries": 2684, "num_deletes": 542, "total_data_size": 4857881, "memory_usage": 4935864, "flush_reason": "Manual Compaction"}
Jan 22 10:28:43 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 145] Level-0 flush table #227: started
Jan 22 10:28:43 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095723547929, "cf_name": "default", "job": 145, "event": "table_file_creation", "file_number": 227, "file_size": 3165791, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 109645, "largest_seqno": 112324, "table_properties": {"data_size": 3155857, "index_size": 5403, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3461, "raw_key_size": 31224, "raw_average_key_size": 22, "raw_value_size": 3131924, "raw_average_value_size": 2304, "num_data_blocks": 228, "num_entries": 1359, "num_filter_entries": 1359, "num_deletions": 542, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095554, "oldest_key_time": 1769095554, "file_creation_time": 1769095723, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 227, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:28:43 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 145] Flush lasted 49580 microseconds, and 7850 cpu microseconds.
Jan 22 10:28:43 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:28:43 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:43.548269) [db/flush_job.cc:967] [default] [JOB 145] Level-0 flush table #227: 3165791 bytes OK
Jan 22 10:28:43 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:43.548394) [db/memtable_list.cc:519] [default] Level-0 commit table #227 started
Jan 22 10:28:43 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:43.549796) [db/memtable_list.cc:722] [default] Level-0 commit table #227: memtable #1 done
Jan 22 10:28:43 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:43.549808) EVENT_LOG_v1 {"time_micros": 1769095723549804, "job": 145, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:28:43 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:43.549824) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:28:43 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 145] Try to delete WAL files size 4844635, prev total WAL file size 4844635, number of live WAL files 2.
Jan 22 10:28:43 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000223.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:28:43 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:43.551440) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0035323836' seq:72057594037927935, type:22 .. '6C6F676D0035353339' seq:0, type:0; will stop at (end)
Jan 22 10:28:43 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 146] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:28:43 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 145 Base level 0, inputs: [227(3091KB)], [225(10MB)]
Jan 22 10:28:43 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095723551520, "job": 146, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [227], "files_L6": [225], "score": -1, "input_data_size": 14001644, "oldest_snapshot_seqno": -1}
Jan 22 10:28:43 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 146] Generated table #228: 14433 keys, 13750079 bytes, temperature: kUnknown
Jan 22 10:28:43 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095723632643, "cf_name": "default", "job": 146, "event": "table_file_creation", "file_number": 228, "file_size": 13750079, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13669738, "index_size": 43172, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36101, "raw_key_size": 396006, "raw_average_key_size": 27, "raw_value_size": 13422929, "raw_average_value_size": 930, "num_data_blocks": 1574, "num_entries": 14433, "num_filter_entries": 14433, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769095723, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 228, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:28:43 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:28:43 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:43.633035) [db/compaction/compaction_job.cc:1663] [default] [JOB 146] Compacted 1@0 + 1@6 files to L6 => 13750079 bytes
Jan 22 10:28:43 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:43.634309) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 172.3 rd, 169.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 10.3 +0.0 blob) out(13.1 +0.0 blob), read-write-amplify(8.8) write-amplify(4.3) OK, records in: 15532, records dropped: 1099 output_compression: NoCompression
Jan 22 10:28:43 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:43.634328) EVENT_LOG_v1 {"time_micros": 1769095723634318, "job": 146, "event": "compaction_finished", "compaction_time_micros": 81269, "compaction_time_cpu_micros": 36049, "output_level": 6, "num_output_files": 1, "total_output_size": 13750079, "num_input_records": 15532, "num_output_records": 14433, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:28:43 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000227.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:28:43 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095723635005, "job": 146, "event": "table_file_deletion", "file_number": 227}
Jan 22 10:28:43 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000225.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:28:43 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095723636936, "job": 146, "event": "table_file_deletion", "file_number": 225}
Jan 22 10:28:43 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:43.551316) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:28:43 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:43.637049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:28:43 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:43.637054) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:28:43 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:43.637055) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:28:43 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:43.637057) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:28:43 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:43.637058) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:28:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:43.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:43 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:43 np0005592158 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6713 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:43 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:44.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:44 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:28:45 np0005592158 podman[253152]: 2026-01-22 15:28:45.077413451 +0000 UTC m=+0.060889978 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 10:28:45 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:45.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:46.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:46 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:46 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #229. Immutable memtables: 0.
Jan 22 10:28:46 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:46.885346) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:28:46 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 147] Flushing memtable with next log file: 229
Jan 22 10:28:46 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095726885397, "job": 147, "event": "flush_started", "num_memtables": 1, "num_entries": 308, "num_deletes": 258, "total_data_size": 128067, "memory_usage": 135096, "flush_reason": "Manual Compaction"}
Jan 22 10:28:46 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 147] Level-0 flush table #230: started
Jan 22 10:28:46 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095726888071, "cf_name": "default", "job": 147, "event": "table_file_creation", "file_number": 230, "file_size": 83592, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 112329, "largest_seqno": 112632, "table_properties": {"data_size": 81614, "index_size": 141, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5312, "raw_average_key_size": 18, "raw_value_size": 77703, "raw_average_value_size": 274, "num_data_blocks": 6, "num_entries": 283, "num_filter_entries": 283, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095723, "oldest_key_time": 1769095723, "file_creation_time": 1769095726, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 230, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:28:46 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 147] Flush lasted 2771 microseconds, and 1012 cpu microseconds.
Jan 22 10:28:46 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:28:46 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:46.888120) [db/flush_job.cc:967] [default] [JOB 147] Level-0 flush table #230: 83592 bytes OK
Jan 22 10:28:46 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:46.888142) [db/memtable_list.cc:519] [default] Level-0 commit table #230 started
Jan 22 10:28:46 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:46.889185) [db/memtable_list.cc:722] [default] Level-0 commit table #230: memtable #1 done
Jan 22 10:28:46 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:46.889198) EVENT_LOG_v1 {"time_micros": 1769095726889194, "job": 147, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:28:46 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:46.889215) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:28:46 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 147] Try to delete WAL files size 125797, prev total WAL file size 125797, number of live WAL files 2.
Jan 22 10:28:46 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000226.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:28:46 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:46.889801) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730039353338' seq:72057594037927935, type:22 .. '7061786F730039373930' seq:0, type:0; will stop at (end)
Jan 22 10:28:46 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 148] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:28:46 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 147 Base level 0, inputs: [230(81KB)], [228(13MB)]
Jan 22 10:28:46 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095726889885, "job": 148, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [230], "files_L6": [228], "score": -1, "input_data_size": 13833671, "oldest_snapshot_seqno": -1}
Jan 22 10:28:47 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 148] Generated table #231: 14193 keys, 12058972 bytes, temperature: kUnknown
Jan 22 10:28:47 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095727448084, "cf_name": "default", "job": 148, "event": "table_file_creation", "file_number": 231, "file_size": 12058972, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11981514, "index_size": 40865, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35525, "raw_key_size": 391693, "raw_average_key_size": 27, "raw_value_size": 11740070, "raw_average_value_size": 827, "num_data_blocks": 1472, "num_entries": 14193, "num_filter_entries": 14193, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769095726, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 231, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:28:47 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:28:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:28:47.528 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:28:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:28:47.528 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:28:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:28:47.529 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:28:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:47.448495) [db/compaction/compaction_job.cc:1663] [default] [JOB 148] Compacted 1@0 + 1@6 files to L6 => 12058972 bytes
Jan 22 10:28:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:47.773345) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 24.8 rd, 21.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 13.1 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(309.8) write-amplify(144.3) OK, records in: 14716, records dropped: 523 output_compression: NoCompression
Jan 22 10:28:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:47.773412) EVENT_LOG_v1 {"time_micros": 1769095727773388, "job": 148, "event": "compaction_finished", "compaction_time_micros": 558287, "compaction_time_cpu_micros": 41618, "output_level": 6, "num_output_files": 1, "total_output_size": 12058972, "num_input_records": 14716, "num_output_records": 14193, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:28:47 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000230.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:28:47 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095727773799, "job": 148, "event": "table_file_deletion", "file_number": 230}
Jan 22 10:28:47 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000228.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:28:47 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095727778406, "job": 148, "event": "table_file_deletion", "file_number": 228}
Jan 22 10:28:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:46.889506) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:28:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:47.778556) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:28:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:47.778563) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:28:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:47.778565) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:28:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:47.778567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:28:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:28:47.778569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:28:47 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:47.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:48.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:49 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:49 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:49 np0005592158 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6718 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:49 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:28:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:49.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:28:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:50.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:28:50 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:51 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:51 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:51 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:28:51 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:28:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:51.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:52.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:53.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:54 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:28:54 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:54 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:28:54 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:28:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:54.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:54 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:28:55 np0005592158 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6723 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:55 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:28:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:55.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:28:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:56.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:56 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:56 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:57 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:57 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:28:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:57.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:28:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:28:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:58.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:28:59 np0005592158 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6728 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:59 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:59 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:28:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:28:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:59.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:29:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:00.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:29:00 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:01 np0005592158 podman[253303]: 2026-01-22 15:29:01.14871736 +0000 UTC m=+0.134839798 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 22 10:29:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:01.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:29:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:02.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:29:02 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:02 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:02 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:29:02 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:02 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:29:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:29:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:03.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:29:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:04.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:04 np0005592158 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6732 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:04 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:04 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:29:05 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:05.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:06.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:07 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:07 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:29:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:07.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:29:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:08.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:08 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:08 np0005592158 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6738 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:09 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:29:09 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:09.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:29:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:10.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:29:11 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:11 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:11.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:12.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:12 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:13 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:29:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:13.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:29:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:29:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:14.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:29:14 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:29:14 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:14 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:15.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:15 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:16 np0005592158 podman[253380]: 2026-01-22 15:29:16.080757954 +0000 UTC m=+0.066245064 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent)
Jan 22 10:29:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:16.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:17 np0005592158 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6748 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:17 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:29:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:17.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:29:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:29:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:18.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:29:18 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:19 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:19 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:29:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:19.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:29:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:20.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:29:20 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:21 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:21 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:21.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:29:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:22.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:29:22 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:23 np0005592158 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6752 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:23 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:24.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:24.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:24 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:29:24 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:25 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:26.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:26.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:26 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:28.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:28.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:28 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:29 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:29 np0005592158 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6757 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:29 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:29:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:30.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:30.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:30 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:30 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:31 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:32.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:32 np0005592158 podman[253401]: 2026-01-22 15:29:32.117596709 +0000 UTC m=+0.094254931 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 10:29:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:29:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:32.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:29:33 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:34.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:29:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:34.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:29:34 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:34 np0005592158 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6762 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:34 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:29:35 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:36.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:36.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:36 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:36 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:38.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:38.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:38 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:39 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:29:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:29:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:40.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:29:40 np0005592158 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6767 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:29:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:40.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:29:41 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:41 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:41 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:42.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:42.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:42 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:43 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:43 np0005592158 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6772 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:43 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:44.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:44.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:44 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:44 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:29:45 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:46.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:46.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:47 np0005592158 podman[253428]: 2026-01-22 15:29:47.053405874 +0000 UTC m=+0.048352439 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 10:29:47 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:29:47.528 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:29:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:29:47.529 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:29:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:29:47.529 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:29:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:48.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:48.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:48 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:48 np0005592158 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6777 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:48 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:49 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:29:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:29:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:50.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:29:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:50.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:50 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:51 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:52.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:52.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:52 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:53 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:53 np0005592158 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6782 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:29:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:54.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:29:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:54.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:54 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:54 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:29:55 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:56.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:56.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:56 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:56 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:57 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:58.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:29:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:29:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:58.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:29:58 np0005592158 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6788 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:58 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:59 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:29:59 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:00.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:30:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:00.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:30:00 np0005592158 ceph-mon[81715]: Health detail: HEALTH_WARN 172 slow ops, oldest one blocked for 6788 sec, osd.2 has slow ops
Jan 22 10:30:00 np0005592158 ceph-mon[81715]: [WRN] SLOW_OPS: 172 slow ops, oldest one blocked for 6788 sec, osd.2 has slow ops
Jan 22 10:30:00 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:02.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:02.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:02 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:03 np0005592158 podman[253472]: 2026-01-22 15:30:03.024313288 +0000 UTC m=+0.186198268 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:30:03 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:04.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:30:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:04.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:30:04 np0005592158 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6792 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:04 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:30:04 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:30:04 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:04 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:30:05 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:30:05 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:30:05 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:30:05 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:06.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:06.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:06 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:07 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:08.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:08.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:09 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:09 np0005592158 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6798 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:09 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:30:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:30:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:10.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:30:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:10.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:10 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:11 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:11 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:30:11 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:30:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:30:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:12.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:30:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:12.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:12 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:13 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:14.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:14.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:14 np0005592158 ceph-mon[81715]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:14 np0005592158 ceph-mon[81715]: Health check update: 172 slow ops, oldest one blocked for 6803 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:14 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:30:15 np0005592158 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:16.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:16.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:16 np0005592158 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:17 np0005592158 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:18 np0005592158 podman[253776]: 2026-01-22 15:30:18.063138809 +0000 UTC m=+0.059309286 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 10:30:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:30:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:18.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:30:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:18.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:18 np0005592158 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:18 np0005592158 ceph-mon[81715]: Health check update: 75 slow ops, oldest one blocked for 6808 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:19 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:30:19 np0005592158 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:20.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:20.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:20 np0005592158 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:20 np0005592158 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:21 np0005592158 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:30:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:22.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:30:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:30:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:22.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:30:23 np0005592158 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:24.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:24 np0005592158 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:24 np0005592158 ceph-mon[81715]: Health check update: 75 slow ops, oldest one blocked for 6813 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:24.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:24 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:30:25 np0005592158 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:26.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:26.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:26 np0005592158 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:26 np0005592158 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:27 np0005592158 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:28.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:28.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:28 np0005592158 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:28 np0005592158 ceph-mon[81715]: Health check update: 75 slow ops, oldest one blocked for 6818 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:29 np0005592158 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:29 np0005592158 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:29 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:30:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:30:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:30.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:30:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:30.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:30 np0005592158 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:31 np0005592158 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:32.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:32.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:32 np0005592158 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:34 np0005592158 ceph-mon[81715]: Health check update: 75 slow ops, oldest one blocked for 6822 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:34 np0005592158 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:34.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:34 np0005592158 podman[253795]: 2026-01-22 15:30:34.131256891 +0000 UTC m=+0.117762687 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 22 10:30:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:34.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:34 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:30:35 np0005592158 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:36.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:36.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:37 np0005592158 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:38.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:38.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:38 np0005592158 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:38 np0005592158 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:38 np0005592158 ceph-mon[81715]: Health check update: 75 slow ops, oldest one blocked for 6827 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:39 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:30:39 np0005592158 ceph-mon[81715]: 65 slow requests (by type [ 'delayed' : 65 ] most affected pool [ 'vms' : 41 ])
Jan 22 10:30:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:40.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:40.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:40 np0005592158 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:40 np0005592158 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:41 np0005592158 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:42.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:42.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:43 np0005592158 ceph-mon[81715]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:44.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:30:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:44.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:30:44 np0005592158 ceph-mon[81715]: Health check update: 75 slow ops, oldest one blocked for 6832 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:44 np0005592158 ceph-mon[81715]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:44 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:30:45 np0005592158 ceph-mon[81715]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:46.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:30:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:46.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:30:46 np0005592158 ceph-mon[81715]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:30:47.529 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:30:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:30:47.530 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:30:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:30:47.530 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:30:47 np0005592158 ceph-mon[81715]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:30:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:48.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:30:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:30:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:48.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:30:48 np0005592158 ceph-mon[81715]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:48 np0005592158 ceph-mon[81715]: Health check update: 173 slow ops, oldest one blocked for 6837 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:48 np0005592158 ceph-mon[81715]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:49 np0005592158 podman[253821]: 2026-01-22 15:30:49.06322714 +0000 UTC m=+0.055691637 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 22 10:30:49 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:30:49 np0005592158 ceph-mon[81715]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:50.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:50.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:50 np0005592158 ceph-mon[81715]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:51 np0005592158 ceph-mon[81715]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:52.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:52.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:53 np0005592158 ceph-mon[81715]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:54 np0005592158 ceph-mon[81715]: Health check update: 173 slow ops, oldest one blocked for 6842 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:54 np0005592158 ceph-mon[81715]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:54.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:30:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:54.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:30:54 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:30:55 np0005592158 ceph-mon[81715]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:30:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:56.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:30:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:56.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:56 np0005592158 ceph-mon[81715]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:57 np0005592158 ceph-mon[81715]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:58.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:30:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:58.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:58 np0005592158 ceph-mon[81715]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:58 np0005592158 ceph-mon[81715]: Health check update: 173 slow ops, oldest one blocked for 6847 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:59 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:30:59 np0005592158 ceph-mon[81715]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:59 np0005592158 ceph-mon[81715]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:31:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:31:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:00.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:31:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:00.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:01 np0005592158 ceph-mon[81715]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:31:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:02.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:02 np0005592158 ceph-mon[81715]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:31:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:02.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:03 np0005592158 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:31:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:04.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:31:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:04.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:04 np0005592158 ceph-mon[81715]: Health check update: 140 slow ops, oldest one blocked for 6852 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:04 np0005592158 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:04 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:31:05 np0005592158 podman[253841]: 2026-01-22 15:31:05.143082761 +0000 UTC m=+0.128319213 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 22 10:31:05 np0005592158 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:31:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:06.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:31:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:06.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:06 np0005592158 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:07 np0005592158 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:07 np0005592158 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:31:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:08.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:31:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:08.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:09 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:31:09 np0005592158 ceph-mon[81715]: Health check update: 5 slow ops, oldest one blocked for 6857 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:09 np0005592158 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:10.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:31:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:10.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:31:11 np0005592158 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:11 np0005592158 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:12.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:12.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:12 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 10:31:12 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 10:31:12 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:31:12 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:31:12 np0005592158 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:13 np0005592158 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:31:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:14.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:31:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:31:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:14.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:31:14 np0005592158 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:14 np0005592158 ceph-mon[81715]: Health check update: 5 slow ops, oldest one blocked for 6862 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:14 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:31:14 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:31:14 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:31:14 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:31:14 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:31:14 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:31:15 np0005592158 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:16.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:16.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:16 np0005592158 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:17 np0005592158 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:18.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:31:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:18.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:31:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:31:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3953269957' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:31:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:31:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3953269957' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:31:18 np0005592158 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:18 np0005592158 ceph-mon[81715]: Health check update: 5 slow ops, oldest one blocked for 6867 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:19 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:31:19 np0005592158 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:19 np0005592158 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:20 np0005592158 podman[254000]: 2026-01-22 15:31:20.071359491 +0000 UTC m=+0.056528250 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 10:31:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:31:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:20.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:31:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:20.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:20 np0005592158 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:21 np0005592158 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:21 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:31:21 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:31:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:22.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:31:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:22.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:31:23 np0005592158 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:31:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:24.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:31:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:24.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:24 np0005592158 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:24 np0005592158 ceph-mon[81715]: Health check update: 5 slow ops, oldest one blocked for 6872 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:24 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:31:25 np0005592158 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:26.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:26.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:26 np0005592158 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:27 np0005592158 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:31:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:28.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:31:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:28.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:28 np0005592158 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:29 np0005592158 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:29 np0005592158 ceph-mon[81715]: Health check update: 5 slow ops, oldest one blocked for 6877 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:29 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:31:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:30.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:30.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:30 np0005592158 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:31 np0005592158 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:31:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:32.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:31:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:32.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:32 np0005592158 ceph-mon[81715]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:33 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:33 np0005592158 ceph-mon[81715]: Health check update: 5 slow ops, oldest one blocked for 6882 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:34.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:34.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:34 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:34 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:31:35 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:36 np0005592158 podman[254071]: 2026-01-22 15:31:36.147224302 +0000 UTC m=+0.128319503 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 22 10:31:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:36.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:31:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:36.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:31:36 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:38 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:31:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:38.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:31:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:38.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:38 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #232. Immutable memtables: 0.
Jan 22 10:31:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:31:38.419081) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:31:38 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 149] Flushing memtable with next log file: 232
Jan 22 10:31:38 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095898419141, "job": 149, "event": "flush_started", "num_memtables": 1, "num_entries": 2750, "num_deletes": 543, "total_data_size": 5142616, "memory_usage": 5238040, "flush_reason": "Manual Compaction"}
Jan 22 10:31:38 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 149] Level-0 flush table #233: started
Jan 22 10:31:38 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095898440636, "cf_name": "default", "job": 149, "event": "table_file_creation", "file_number": 233, "file_size": 2094155, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 112637, "largest_seqno": 115382, "table_properties": {"data_size": 2085806, "index_size": 3950, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3333, "raw_key_size": 31336, "raw_average_key_size": 23, "raw_value_size": 2063839, "raw_average_value_size": 1570, "num_data_blocks": 166, "num_entries": 1314, "num_filter_entries": 1314, "num_deletions": 543, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095727, "oldest_key_time": 1769095727, "file_creation_time": 1769095898, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 233, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:31:38 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 149] Flush lasted 21621 microseconds, and 10136 cpu microseconds.
Jan 22 10:31:38 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:31:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:31:38.440712) [db/flush_job.cc:967] [default] [JOB 149] Level-0 flush table #233: 2094155 bytes OK
Jan 22 10:31:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:31:38.440730) [db/memtable_list.cc:519] [default] Level-0 commit table #233 started
Jan 22 10:31:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:31:38.443578) [db/memtable_list.cc:722] [default] Level-0 commit table #233: memtable #1 done
Jan 22 10:31:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:31:38.443594) EVENT_LOG_v1 {"time_micros": 1769095898443589, "job": 149, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:31:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:31:38.443612) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:31:38 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 149] Try to delete WAL files size 5129027, prev total WAL file size 5137294, number of live WAL files 2.
Jan 22 10:31:38 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000229.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:31:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:31:38.445001) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033323538' seq:72057594037927935, type:22 .. '6D6772737461740033353130' seq:0, type:0; will stop at (end)
Jan 22 10:31:38 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 150] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:31:38 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 149 Base level 0, inputs: [233(2045KB)], [231(11MB)]
Jan 22 10:31:38 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095898445034, "job": 150, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [233], "files_L6": [231], "score": -1, "input_data_size": 14153127, "oldest_snapshot_seqno": -1}
Jan 22 10:31:38 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 150] Generated table #234: 14484 keys, 11396640 bytes, temperature: kUnknown
Jan 22 10:31:38 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095898526967, "cf_name": "default", "job": 150, "event": "table_file_creation", "file_number": 234, "file_size": 11396640, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11319240, "index_size": 40103, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36229, "raw_key_size": 397009, "raw_average_key_size": 27, "raw_value_size": 11074711, "raw_average_value_size": 764, "num_data_blocks": 1444, "num_entries": 14484, "num_filter_entries": 14484, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769095898, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 234, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:31:38 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:31:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:31:38.527322) [db/compaction/compaction_job.cc:1663] [default] [JOB 150] Compacted 1@0 + 1@6 files to L6 => 11396640 bytes
Jan 22 10:31:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:31:38.528719) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 172.6 rd, 139.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 11.5 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(12.2) write-amplify(5.4) OK, records in: 15507, records dropped: 1023 output_compression: NoCompression
Jan 22 10:31:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:31:38.528743) EVENT_LOG_v1 {"time_micros": 1769095898528733, "job": 150, "event": "compaction_finished", "compaction_time_micros": 82012, "compaction_time_cpu_micros": 39537, "output_level": 6, "num_output_files": 1, "total_output_size": 11396640, "num_input_records": 15507, "num_output_records": 14484, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:31:38 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000233.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:31:38 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095898529276, "job": 150, "event": "table_file_deletion", "file_number": 233}
Jan 22 10:31:38 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000231.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:31:38 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095898531889, "job": 150, "event": "table_file_deletion", "file_number": 231}
Jan 22 10:31:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:31:38.444921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:31:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:31:38.531942) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:31:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:31:38.531948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:31:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:31:38.531950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:31:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:31:38.531952) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:31:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:31:38.531953) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:31:39 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:39 np0005592158 ceph-mon[81715]: Health check update: 59 slow ops, oldest one blocked for 6887 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:39 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:31:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:40.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:31:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:40.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:31:41 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:41 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:42.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:42.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:42 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:42 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:43 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:43 np0005592158 ceph-mon[81715]: Health check update: 59 slow ops, oldest one blocked for 6892 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:44.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:31:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:44.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:31:44 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:31:44 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:44 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:46 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:46.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:46.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:47 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:31:47.530 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:31:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:31:47.531 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:31:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:31:47.531 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:31:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:48.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:48.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:48 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:48 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:49 np0005592158 ceph-mon[81715]: Health check update: 59 slow ops, oldest one blocked for 6897 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:49 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:49 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:31:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:50.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:50.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:51 np0005592158 podman[254097]: 2026-01-22 15:31:51.078864433 +0000 UTC m=+0.066997943 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 22 10:31:51 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:52.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:52.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:52 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:54.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:54 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:54 np0005592158 ceph-mon[81715]: Health check update: 59 slow ops, oldest one blocked for 6903 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:54.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:54 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:31:55 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:55 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:56.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:56.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:56 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:57 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:31:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:58.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:31:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:31:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:58.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:58 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:58 np0005592158 ceph-mon[81715]: Health check update: 59 slow ops, oldest one blocked for 6907 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:59 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:59 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:32:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:00.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:00.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:00 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:32:02 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:32:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:02.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:02.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:03 np0005592158 ceph-mon[81715]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:32:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:04.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:04.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:04 np0005592158 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:04 np0005592158 ceph-mon[81715]: Health check update: 59 slow ops, oldest one blocked for 6912 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:04 np0005592158 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:04 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:32:05 np0005592158 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:32:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:06.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:32:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:06.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:07 np0005592158 podman[254116]: 2026-01-22 15:32:07.110514167 +0000 UTC m=+0.091411443 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller)
Jan 22 10:32:07 np0005592158 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:08 np0005592158 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:08.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:08.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:09 np0005592158 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:09 np0005592158 ceph-mon[81715]: Health check update: 159 slow ops, oldest one blocked for 6917 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:09 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:32:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:10.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:32:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:10.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:32:10 np0005592158 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:11 np0005592158 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:11 np0005592158 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:12.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:12.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:12 np0005592158 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:14.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:14.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:14 np0005592158 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:14 np0005592158 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:14 np0005592158 ceph-mon[81715]: Health check update: 159 slow ops, oldest one blocked for 6922 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:14 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:32:16 np0005592158 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:32:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:16.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:32:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:16.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:17 np0005592158 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:18.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:18.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:18 np0005592158 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:19 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:32:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:32:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:20.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:32:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:32:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:20.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:32:20 np0005592158 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:20 np0005592158 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:20 np0005592158 ceph-mon[81715]: Health check update: 159 slow ops, oldest one blocked for 6927 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:22 np0005592158 podman[254144]: 2026-01-22 15:32:22.070491435 +0000 UTC m=+0.060962941 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Jan 22 10:32:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:22.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:22.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:22 np0005592158 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:23 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:32:23.895 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=62, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=61) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:32:23 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:32:23.896 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:32:23 np0005592158 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:23 np0005592158 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:23 np0005592158 ceph-mon[81715]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:23 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:32:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:32:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:24.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:32:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:24.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:24 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:32:25 np0005592158 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:25 np0005592158 ceph-mon[81715]: Health check update: 159 slow ops, oldest one blocked for 6933 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:25 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:32:25 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:32:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:26.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:32:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:26.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:32:27 np0005592158 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:32:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:28.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:32:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:28.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:29 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:32:29 np0005592158 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:29 np0005592158 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:29 np0005592158 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:29 np0005592158 ceph-mon[81715]: Health check update: 183 slow ops, oldest one blocked for 6938 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:32:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:30.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:32:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:32:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:30.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:32:31 np0005592158 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:31 np0005592158 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:31 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:32:31.898 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '62'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:32:32 np0005592158 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:32.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:32.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:33 np0005592158 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:33 np0005592158 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:34 np0005592158 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:34 np0005592158 ceph-mon[81715]: Health check update: 183 slow ops, oldest one blocked for 6943 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:34.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:34.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:34 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:32:35 np0005592158 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:35 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:32:35 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:32:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:36.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:36.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:36 np0005592158 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:37 np0005592158 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:38 np0005592158 podman[254344]: 2026-01-22 15:32:38.153430519 +0000 UTC m=+0.129239667 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 10:32:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:38.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:32:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:38.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:32:38 np0005592158 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:39 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:32:39 np0005592158 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:39 np0005592158 ceph-mon[81715]: Health check update: 183 slow ops, oldest one blocked for 6948 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 10:32:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:40.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:40.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:41 np0005592158 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:32:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:42.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:32:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:42.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:42 np0005592158 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:42 np0005592158 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:43 np0005592158 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 10:32:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:44.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:32:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:44.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:32:44 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:32:44 np0005592158 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:44 np0005592158 ceph-mon[81715]: Health check update: 183 slow ops, oldest one blocked for 6953 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:45 np0005592158 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 10:32:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:46.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:46.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:47 np0005592158 ceph-mon[81715]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 84 ])
Jan 22 10:32:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:32:47.532 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:32:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:32:47.532 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:32:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:32:47.532 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:32:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:48.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:48.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:48 np0005592158 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:48 np0005592158 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:49 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:32:50 np0005592158 ceph-mon[81715]: Health check update: 183 slow ops, oldest one blocked for 6958 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:50 np0005592158 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:50.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:50.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:52 np0005592158 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:52.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:52.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:53 np0005592158 podman[254370]: 2026-01-22 15:32:53.074586767 +0000 UTC m=+0.054980519 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 10:32:53 np0005592158 ceph-mon[81715]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:53 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:32:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:54.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:54.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:54 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:32:54 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:32:54 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:32:54 np0005592158 ceph-mon[81715]: Health check update: 183 slow ops, oldest one blocked for 6963 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:56 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:32:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:56.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:56.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:57 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:32:58 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:32:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:32:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:58.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:32:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:32:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:58.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:59 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:32:59 np0005592158 ceph-mon[81715]: Health check update: 41 slow ops, oldest one blocked for 6968 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:59 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:33:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:00.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:00.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:00 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:01 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:01 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:02.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:02.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:03 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:04.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:04.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:04 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:04 np0005592158 ceph-mon[81715]: Health check update: 41 slow ops, oldest one blocked for 6973 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:04 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:04 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:33:05 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:06.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:33:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:06.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:33:08 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:08.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:08.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:09 np0005592158 podman[254390]: 2026-01-22 15:33:09.107943471 +0000 UTC m=+0.095587896 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 22 10:33:09 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:33:09 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:09 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:09 np0005592158 ceph-mon[81715]: Health check update: 41 slow ops, oldest one blocked for 6978 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:10.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:10.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:11 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:11 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:12 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:33:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:12.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:33:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:12.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:13 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:13 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:14.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:14.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:14 np0005592158 ceph-mon[81715]: Health check update: 41 slow ops, oldest one blocked for 6983 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:14 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:14 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:33:15 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:33:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:16.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:33:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:16.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:16 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:17 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:18.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:33:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:18.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:33:18 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:18 np0005592158 ceph-mon[81715]: Health check update: 41 slow ops, oldest one blocked for 6988 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:19 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:33:20 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:20.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:20.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:21 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:22.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:22.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:24 np0005592158 podman[254416]: 2026-01-22 15:33:24.06539459 +0000 UTC m=+0.051576037 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 10:33:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:24.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:24.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:24 np0005592158 ceph-mon[81715]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:24 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:24 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:33:26 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:26 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:26.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:26.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:28 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:28 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:28 np0005592158 ceph-mon[81715]: Health check update: 184 slow ops, oldest one blocked for 6998 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:28 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:28.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:28.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:29 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:29 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:33:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:30.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:30.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:30 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:30 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:31 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:32.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:32.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:32 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:34 np0005592158 ceph-mon[81715]: Health check update: 184 slow ops, oldest one blocked for 7003 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:34 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:33:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:34.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:33:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:33:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:34.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:33:34 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:33:35 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:36 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:36 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:33:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:36.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:33:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:36.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:33:37 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:37 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:33:37 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:33:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:38.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:38.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:38 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:38 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:39 np0005592158 ceph-mon[81715]: Health check update: 184 slow ops, oldest one blocked for 7008 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:39 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:39 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:33:40 np0005592158 podman[254566]: 2026-01-22 15:33:40.118514136 +0000 UTC m=+0.092933946 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 10:33:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:40.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:40.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:40 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:41 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:42.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:42.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:43 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:43 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:33:43 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:33:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:44.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:44.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:44 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:44 np0005592158 ceph-mon[81715]: Health check update: 184 slow ops, oldest one blocked for 7013 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:44 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:44 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:33:45 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:46.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:46.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:47 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:33:47.534 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:33:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:33:47.534 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:33:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:33:47.535 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:33:48 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:48.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:48.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:49 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:49 np0005592158 ceph-mon[81715]: Health check update: 184 slow ops, oldest one blocked for 7018 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:49 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:33:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:50.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:50.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:50 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:51 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:51 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:52.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:33:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:52.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:33:52 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:54 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #235. Immutable memtables: 0.
Jan 22 10:33:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:33:54.005185) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:33:54 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 151] Flushing memtable with next log file: 235
Jan 22 10:33:54 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096034006045, "job": 151, "event": "flush_started", "num_memtables": 1, "num_entries": 2316, "num_deletes": 736, "total_data_size": 3741023, "memory_usage": 3824208, "flush_reason": "Manual Compaction"}
Jan 22 10:33:54 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 151] Level-0 flush table #236: started
Jan 22 10:33:54 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096034029180, "cf_name": "default", "job": 151, "event": "table_file_creation", "file_number": 236, "file_size": 2442903, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 115387, "largest_seqno": 117698, "table_properties": {"data_size": 2434191, "index_size": 4181, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3461, "raw_key_size": 29850, "raw_average_key_size": 21, "raw_value_size": 2411596, "raw_average_value_size": 1777, "num_data_blocks": 178, "num_entries": 1357, "num_filter_entries": 1357, "num_deletions": 736, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095898, "oldest_key_time": 1769095898, "file_creation_time": 1769096034, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 236, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:33:54 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 151] Flush lasted 24035 microseconds, and 11438 cpu microseconds.
Jan 22 10:33:54 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:33:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:33:54.029241) [db/flush_job.cc:967] [default] [JOB 151] Level-0 flush table #236: 2442903 bytes OK
Jan 22 10:33:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:33:54.029265) [db/memtable_list.cc:519] [default] Level-0 commit table #236 started
Jan 22 10:33:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:33:54.030700) [db/memtable_list.cc:722] [default] Level-0 commit table #236: memtable #1 done
Jan 22 10:33:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:33:54.030721) EVENT_LOG_v1 {"time_micros": 1769096034030714, "job": 151, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:33:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:33:54.030743) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:33:54 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 151] Try to delete WAL files size 3728491, prev total WAL file size 3728491, number of live WAL files 2.
Jan 22 10:33:54 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000232.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:33:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:33:54.032428) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0035353338' seq:72057594037927935, type:22 .. '6C6F676D0035373931' seq:0, type:0; will stop at (end)
Jan 22 10:33:54 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 152] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:33:54 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 151 Base level 0, inputs: [236(2385KB)], [234(10MB)]
Jan 22 10:33:54 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096034032537, "job": 152, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [236], "files_L6": [234], "score": -1, "input_data_size": 13839543, "oldest_snapshot_seqno": -1}
Jan 22 10:33:54 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 152] Generated table #237: 14352 keys, 11969979 bytes, temperature: kUnknown
Jan 22 10:33:54 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096034124087, "cf_name": "default", "job": 152, "event": "table_file_creation", "file_number": 237, "file_size": 11969979, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11892081, "index_size": 40921, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35909, "raw_key_size": 395202, "raw_average_key_size": 27, "raw_value_size": 11648603, "raw_average_value_size": 811, "num_data_blocks": 1472, "num_entries": 14352, "num_filter_entries": 14352, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769096034, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 237, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:33:54 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:33:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:33:54.124973) [db/compaction/compaction_job.cc:1663] [default] [JOB 152] Compacted 1@0 + 1@6 files to L6 => 11969979 bytes
Jan 22 10:33:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:33:54.126990) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 150.4 rd, 130.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 10.9 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(10.6) write-amplify(4.9) OK, records in: 15841, records dropped: 1489 output_compression: NoCompression
Jan 22 10:33:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:33:54.127024) EVENT_LOG_v1 {"time_micros": 1769096034127009, "job": 152, "event": "compaction_finished", "compaction_time_micros": 92043, "compaction_time_cpu_micros": 35204, "output_level": 6, "num_output_files": 1, "total_output_size": 11969979, "num_input_records": 15841, "num_output_records": 14352, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:33:54 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000236.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:33:54 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096034128469, "job": 152, "event": "table_file_deletion", "file_number": 236}
Jan 22 10:33:54 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000234.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:33:54 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096034133084, "job": 152, "event": "table_file_deletion", "file_number": 234}
Jan 22 10:33:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:33:54.032315) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:33:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:33:54.133163) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:33:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:33:54.133169) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:33:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:33:54.133171) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:33:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:33:54.133173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:33:54 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:33:54.133175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:33:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:54.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:54.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:54 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:54 np0005592158 ceph-mon[81715]: Health check update: 184 slow ops, oldest one blocked for 7023 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:54 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:33:55 np0005592158 podman[254643]: 2026-01-22 15:33:55.088960437 +0000 UTC m=+0.073010386 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 22 10:33:55 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:55 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:56.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:56.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:57 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:33:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:58.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:33:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:33:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:58.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:58 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:59 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:33:59 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:59 np0005592158 ceph-mon[81715]: Health check update: 184 slow ops, oldest one blocked for 7028 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:59 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:00.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:34:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:00.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:34:01 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:34:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:02.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:34:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:02.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:02 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:02 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:03 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:34:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:04.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:34:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:04.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:04 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:34:05 np0005592158 ceph-mon[81715]: Health check update: 184 slow ops, oldest one blocked for 7033 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:05 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:06.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:06.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:07 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:07 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:08.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:08.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:08 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:08 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:08 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:34:08.839 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=63, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=62) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:34:08 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:34:08.840 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:34:09 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:34:10 np0005592158 ceph-mon[81715]: Health check update: 184 slow ops, oldest one blocked for 7038 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:10 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:10.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:34:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:10.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:34:11 np0005592158 podman[254662]: 2026-01-22 15:34:11.120998884 +0000 UTC m=+0.113396068 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 22 10:34:11 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:34:11.843 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '63'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:34:12 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:12.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:12.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:13 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:13 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:14 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:14 np0005592158 ceph-mon[81715]: Health check update: 184 slow ops, oldest one blocked for 7043 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:14.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:14.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:14 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:34:15 np0005592158 ceph-mon[81715]: 90 slow requests (by type [ 'delayed' : 90 ] most affected pool [ 'vms' : 56 ])
Jan 22 10:34:15 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:16.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:16.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:17 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:18 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:18.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:18.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:34:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/135746434' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:34:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:34:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/135746434' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:34:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 e180: 3 total, 3 up, 3 in
Jan 22 10:34:19 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:19 np0005592158 ceph-mon[81715]: Health check update: 184 slow ops, oldest one blocked for 7048 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:19 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:34:20 np0005592158 ceph-mon[81715]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 43 ])
Jan 22 10:34:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:20.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:34:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:20.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:34:21 np0005592158 ceph-mon[81715]: 55 slow requests (by type [ 'delayed' : 55 ] most affected pool [ 'vms' : 34 ])
Jan 22 10:34:22 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:22.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:22.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:23 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:23 np0005592158 ceph-mon[81715]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:24 np0005592158 ceph-mon[81715]: Health check update: 55 slow ops, oldest one blocked for 7053 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:24 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:24.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:24.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:24 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:34:25 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:26 np0005592158 podman[254689]: 2026-01-22 15:34:26.059462394 +0000 UTC m=+0.055638057 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 10:34:26 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:34:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:26.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:34:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:26.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:27 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:28.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:28.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:28 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:29 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:34:29 np0005592158 ceph-mon[81715]: Health check update: 42 slow ops, oldest one blocked for 7058 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:29 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:30.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:34:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:30.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:34:32 np0005592158 ceph-mon[81715]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 16 ])
Jan 22 10:34:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:32.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:32.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:33 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:34:33 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1083032130' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:34:33 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:34:33 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1083032130' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:34:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:34.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:34.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:34 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:34 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:34 np0005592158 ceph-mon[81715]: Health check update: 21 slow ops, oldest one blocked for 7063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 10:34:34 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.5 total, 600.0 interval#012Cumulative writes: 16K writes, 50K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.00 MB/s#012Cumulative WAL: 16K writes, 6024 syncs, 2.82 writes per sync, written: 0.04 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 701 writes, 1265 keys, 701 commit groups, 1.0 writes per commit group, ingest: 0.53 MB, 0.00 MB/s#012Interval WAL: 701 writes, 347 syncs, 2.02 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 10:34:34 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:34:35 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:35 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:34:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:36.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:34:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:36.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:37 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:34:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:38.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:34:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:34:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:38.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:34:38 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:38 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:39 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:34:40 np0005592158 ceph-mon[81715]: Health check update: 42 slow ops, oldest one blocked for 7068 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:40 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 10:34:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:34:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:40.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:34:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:34:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:40.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:34:41 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:42 np0005592158 podman[254709]: 2026-01-22 15:34:42.104610443 +0000 UTC m=+0.088650709 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:34:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:34:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 10:34:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:42.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:34:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:42.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:42 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:43 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 10:34:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:44.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:34:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:44.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:34:44 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:44 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 10:34:44 np0005592158 ceph-mon[81715]: Health check update: 42 slow ops, oldest one blocked for 7073 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:44 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:44 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:34:44 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:34:45 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:34:45 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:34:45 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:34:45 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:34:45 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:34:45 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:34:45 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 10:34:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:46.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:46.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:46 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:34:47.535 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:34:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:34:47.535 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:34:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:34:47.535 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:34:47 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:47 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #238. Immutable memtables: 0.
Jan 22 10:34:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:34:47.815216) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:34:47 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 153] Flushing memtable with next log file: 238
Jan 22 10:34:47 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096087815248, "job": 153, "event": "flush_started", "num_memtables": 1, "num_entries": 1026, "num_deletes": 346, "total_data_size": 1638939, "memory_usage": 1657928, "flush_reason": "Manual Compaction"}
Jan 22 10:34:47 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 153] Level-0 flush table #239: started
Jan 22 10:34:47 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096087823408, "cf_name": "default", "job": 153, "event": "table_file_creation", "file_number": 239, "file_size": 1076604, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 117703, "largest_seqno": 118724, "table_properties": {"data_size": 1071940, "index_size": 1995, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 14126, "raw_average_key_size": 22, "raw_value_size": 1061306, "raw_average_value_size": 1692, "num_data_blocks": 84, "num_entries": 627, "num_filter_entries": 627, "num_deletions": 346, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769096034, "oldest_key_time": 1769096034, "file_creation_time": 1769096087, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 239, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:34:47 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 153] Flush lasted 8230 microseconds, and 3650 cpu microseconds.
Jan 22 10:34:47 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:34:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:34:47.823445) [db/flush_job.cc:967] [default] [JOB 153] Level-0 flush table #239: 1076604 bytes OK
Jan 22 10:34:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:34:47.823461) [db/memtable_list.cc:519] [default] Level-0 commit table #239 started
Jan 22 10:34:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:34:47.824780) [db/memtable_list.cc:722] [default] Level-0 commit table #239: memtable #1 done
Jan 22 10:34:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:34:47.824828) EVENT_LOG_v1 {"time_micros": 1769096087824817, "job": 153, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:34:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:34:47.824854) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:34:47 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 153] Try to delete WAL files size 1633283, prev total WAL file size 1633283, number of live WAL files 2.
Jan 22 10:34:47 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000235.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:34:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:34:47.825720) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130303430' seq:72057594037927935, type:22 .. '7061786F73003130323932' seq:0, type:0; will stop at (end)
Jan 22 10:34:47 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 154] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:34:47 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 153 Base level 0, inputs: [239(1051KB)], [237(11MB)]
Jan 22 10:34:47 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096087825761, "job": 154, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [239], "files_L6": [237], "score": -1, "input_data_size": 13046583, "oldest_snapshot_seqno": -1}
Jan 22 10:34:47 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 154] Generated table #240: 14268 keys, 11331039 bytes, temperature: kUnknown
Jan 22 10:34:47 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096087920982, "cf_name": "default", "job": 154, "event": "table_file_creation", "file_number": 240, "file_size": 11331039, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11253991, "index_size": 40263, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35717, "raw_key_size": 393638, "raw_average_key_size": 27, "raw_value_size": 11012139, "raw_average_value_size": 771, "num_data_blocks": 1445, "num_entries": 14268, "num_filter_entries": 14268, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769096087, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 240, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:34:47 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:34:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:34:47.921326) [db/compaction/compaction_job.cc:1663] [default] [JOB 154] Compacted 1@0 + 1@6 files to L6 => 11331039 bytes
Jan 22 10:34:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:34:47.922383) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 136.8 rd, 118.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 11.4 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(22.6) write-amplify(10.5) OK, records in: 14979, records dropped: 711 output_compression: NoCompression
Jan 22 10:34:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:34:47.922404) EVENT_LOG_v1 {"time_micros": 1769096087922394, "job": 154, "event": "compaction_finished", "compaction_time_micros": 95382, "compaction_time_cpu_micros": 54677, "output_level": 6, "num_output_files": 1, "total_output_size": 11331039, "num_input_records": 14979, "num_output_records": 14268, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:34:47 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000239.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:34:47 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096087922758, "job": 154, "event": "table_file_deletion", "file_number": 239}
Jan 22 10:34:47 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000237.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:34:47 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096087925344, "job": 154, "event": "table_file_deletion", "file_number": 237}
Jan 22 10:34:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:34:47.825613) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:34:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:34:47.925414) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:34:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:34:47.925419) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:34:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:34:47.925421) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:34:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:34:47.925423) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:34:47 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:34:47.925424) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:34:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:34:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:48.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:34:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 10:34:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:48.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:48 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:49 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:34:49 np0005592158 ceph-mon[81715]: Health check update: 42 slow ops, oldest one blocked for 7078 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:49 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:34:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:50.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:34:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:34:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:50.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:34:50 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:51 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:34:51 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:34:51 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:34:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:52.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:34:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:34:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:52.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:34:53 np0005592158 ceph-mon[81715]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:54 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 10:34:54 np0005592158 ceph-mon[81715]: Health check update: 42 slow ops, oldest one blocked for 7083 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:54.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:54.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:54 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:34:55 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 10:34:55 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 10:34:56 np0005592158 ceph-mon[81715]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 10:34:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:56.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:56.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:57 np0005592158 podman[255036]: 2026-01-22 15:34:57.055787893 +0000 UTC m=+0.043929829 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 10:34:57 np0005592158 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:34:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:58.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:34:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:34:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:58.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:34:58 np0005592158 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:34:59 np0005592158 ceph-mon[81715]: Health check update: 91 slow ops, oldest one blocked for 7088 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:59 np0005592158 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:34:59 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:35:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:35:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:00.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:35:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:00.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:00 np0005592158 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:01 np0005592158 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:35:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:35:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 10:35:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:02.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:35:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:02.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:02 np0005592158 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:04 np0005592158 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:35:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 10:35:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:35:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:04.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:35:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:04.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:04 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:35:05 np0005592158 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:05 np0005592158 ceph-mon[81715]: Health check update: 37 slow ops, oldest one blocked for 7093 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:06 np0005592158 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:35:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 10:35:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:35:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:06.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:35:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:06.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:07 np0005592158 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:08 np0005592158 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:08 np0005592158 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:35:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 10:35:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:08.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:08.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:09 np0005592158 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:09 np0005592158 ceph-mon[81715]: Health check update: 37 slow ops, oldest one blocked for 7098 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:09 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:35:10 np0005592158 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:35:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 10:35:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:10.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:10.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:11 np0005592158 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:35:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 10:35:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:12.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:12.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:13 np0005592158 podman[255056]: 2026-01-22 15:35:13.133808873 +0000 UTC m=+0.108745422 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Jan 22 10:35:13 np0005592158 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:14 np0005592158 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:14 np0005592158 ceph-mon[81715]: Health check update: 37 slow ops, oldest one blocked for 7103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:35:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 10:35:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:35:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:14.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:35:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:35:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:14.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:35:14 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:35:15 np0005592158 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 10:35:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:35:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:35:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:16.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:35:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:35:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:16.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:35:17 np0005592158 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:17 np0005592158 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:18 np0005592158 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:35:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:18.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:35:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:18.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:19 np0005592158 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:19 np0005592158 ceph-mon[81715]: Health check update: 37 slow ops, oldest one blocked for 7108 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:19 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:35:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:35:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 10:35:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:20.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:35:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:20.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:35:21 np0005592158 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:22 np0005592158 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:22 np0005592158 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:35:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:22.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:35:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:22.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:23 np0005592158 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:24 np0005592158 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:24 np0005592158 ceph-mon[81715]: Health check update: 37 slow ops, oldest one blocked for 7113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:35:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:35:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:24.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:35:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:35:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:24.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:24 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:35:25 np0005592158 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:25 np0005592158 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:26 np0005592158 ceph-mon[81715]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:35:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:35:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:26.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:35:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:35:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:26.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:27 np0005592158 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:28 np0005592158 podman[255084]: 2026-01-22 15:35:28.088950359 +0000 UTC m=+0.068729690 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 22 10:35:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:35:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:28.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:35:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:28.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:28 np0005592158 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:29 np0005592158 ceph-mon[81715]: Health check update: 37 slow ops, oldest one blocked for 7118 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:29 np0005592158 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:29 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:35:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:35:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:30.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 10:35:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:30.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:30 np0005592158 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:31 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 10:35:31 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.0 total, 600.0 interval#012Cumulative writes: 21K writes, 119K keys, 21K commit groups, 1.0 writes per commit group, ingest: 0.20 GB, 0.03 MB/s#012Cumulative WAL: 21K writes, 21K syncs, 1.00 writes per sync, written: 0.20 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1754 writes, 10K keys, 1754 commit groups, 1.0 writes per commit group, ingest: 16.51 MB, 0.03 MB/s#012Interval WAL: 1754 writes, 1754 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     77.5      1.64              0.42        77    0.021       0      0       0.0       0.0#012  L6      1/0   10.81 MB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   5.9    136.8    118.5      6.34              2.34        76    0.083    834K    46K       0.0       0.0#012 Sum      1/0   10.81 MB   0.0      0.8     0.1      0.7       0.9      0.1       0.0   6.9    108.7    110.1      7.97              2.77       153    0.052    834K    46K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.5     70.4     71.1      1.10              0.27        12    0.092     91K   5756       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   0.0    136.8    118.5      6.34              2.34        76    0.083    834K    46K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     77.6      1.63              0.42        76    0.022       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 7200.0 total, 600.0 interval#012Flush(GB): cumulative 0.124, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.86 GB write, 0.12 MB/s write, 0.85 GB read, 0.12 MB/s read, 8.0 seconds#012Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 1.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f7686a91f0#2 capacity: 304.00 MB usage: 88.18 MB table_size: 0 occupancy: 18446744073709551615 collections: 13 last_copies: 0 last_secs: 0.00041 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(4591,83.15 MB,27.3512%) FilterBlock(153,2.27 MB,0.74629%) IndexBlock(153,2.77 MB,0.909996%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 22 10:35:32 np0005592158 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:35:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 10:35:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:35:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:35:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:32.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:35:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:32.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:35:34 np0005592158 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:35:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:34.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:35:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:34.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:34 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:35:35 np0005592158 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:35 np0005592158 ceph-mon[81715]: Health check update: 187 slow ops, oldest one blocked for 7123 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:35 np0005592158 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:35 np0005592158 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:35:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 10:35:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:35:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:36.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:35:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:36.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:36 np0005592158 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:37 np0005592158 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:35:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 10:35:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:38.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:38.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:38 np0005592158 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:39 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:35:39 np0005592158 ceph-mon[81715]: Health check update: 187 slow ops, oldest one blocked for 7128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:39 np0005592158 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 10:35:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:35:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:40.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:35:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:40.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:35:40 np0005592158 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:42 np0005592158 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:35:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 10:35:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:42.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:42.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:43 np0005592158 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:43 np0005592158 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:44 np0005592158 podman[255104]: 2026-01-22 15:35:44.111422646 +0000 UTC m=+0.095133153 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 22 10:35:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:35:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 10:35:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:44.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:44.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:44 np0005592158 ceph-mon[81715]: Health check update: 187 slow ops, oldest one blocked for 7133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:44 np0005592158 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:44 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:35:45 np0005592158 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:35:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:35:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:46.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:35:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 10:35:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:46.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:46 np0005592158 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:35:47.535 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:35:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:35:47.536 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:35:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:35:47.536 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:35:48 np0005592158 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 10:35:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:35:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:35:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:35:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:48.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:35:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:48.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:35:49 np0005592158 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:49 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:35:50 np0005592158 ceph-mon[81715]: Health check update: 187 slow ops, oldest one blocked for 7138 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:50 np0005592158 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:35:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 10:35:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:50.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:35:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:50.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:35:51 np0005592158 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:52 np0005592158 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:35:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 10:35:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:35:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:52.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:35:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:52.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:52 np0005592158 podman[255400]: 2026-01-22 15:35:52.948509844 +0000 UTC m=+0.049887250 container create 3bfcc6a59f47aaa43c017ae22da6c5d7abb8c3c9e0a33f9795cd69738670f9fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_beaver, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 22 10:35:52 np0005592158 systemd[1]: Started libpod-conmon-3bfcc6a59f47aaa43c017ae22da6c5d7abb8c3c9e0a33f9795cd69738670f9fd.scope.
Jan 22 10:35:53 np0005592158 systemd[1]: Started libcrun container.
Jan 22 10:35:53 np0005592158 podman[255400]: 2026-01-22 15:35:52.930917289 +0000 UTC m=+0.032294725 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:35:53 np0005592158 podman[255400]: 2026-01-22 15:35:53.027502211 +0000 UTC m=+0.128879637 container init 3bfcc6a59f47aaa43c017ae22da6c5d7abb8c3c9e0a33f9795cd69738670f9fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_beaver, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 10:35:53 np0005592158 podman[255400]: 2026-01-22 15:35:53.040270857 +0000 UTC m=+0.141648273 container start 3bfcc6a59f47aaa43c017ae22da6c5d7abb8c3c9e0a33f9795cd69738670f9fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_beaver, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 22 10:35:53 np0005592158 podman[255400]: 2026-01-22 15:35:53.044512771 +0000 UTC m=+0.145890217 container attach 3bfcc6a59f47aaa43c017ae22da6c5d7abb8c3c9e0a33f9795cd69738670f9fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_beaver, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 22 10:35:53 np0005592158 systemd[1]: libpod-3bfcc6a59f47aaa43c017ae22da6c5d7abb8c3c9e0a33f9795cd69738670f9fd.scope: Deactivated successfully.
Jan 22 10:35:53 np0005592158 fervent_beaver[255416]: 167 167
Jan 22 10:35:53 np0005592158 conmon[255416]: conmon 3bfcc6a59f47aaa43c01 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3bfcc6a59f47aaa43c017ae22da6c5d7abb8c3c9e0a33f9795cd69738670f9fd.scope/container/memory.events
Jan 22 10:35:53 np0005592158 podman[255400]: 2026-01-22 15:35:53.052207829 +0000 UTC m=+0.153585285 container died 3bfcc6a59f47aaa43c017ae22da6c5d7abb8c3c9e0a33f9795cd69738670f9fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_beaver, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 22 10:35:53 np0005592158 systemd[1]: var-lib-containers-storage-overlay-3ad58e0a8fb98a7564e59262c086159244ad1eae06b5586206a5b4b26d323fe8-merged.mount: Deactivated successfully.
Jan 22 10:35:53 np0005592158 podman[255400]: 2026-01-22 15:35:53.09843554 +0000 UTC m=+0.199812956 container remove 3bfcc6a59f47aaa43c017ae22da6c5d7abb8c3c9e0a33f9795cd69738670f9fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_beaver, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 22 10:35:53 np0005592158 systemd[1]: libpod-conmon-3bfcc6a59f47aaa43c017ae22da6c5d7abb8c3c9e0a33f9795cd69738670f9fd.scope: Deactivated successfully.
Jan 22 10:35:53 np0005592158 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:53 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:35:53 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:35:53 np0005592158 podman[255440]: 2026-01-22 15:35:53.321442862 +0000 UTC m=+0.056670184 container create db1de7d224b8d73b95228ffc14ee54f1c045b4312ada1a22bedaaad4efa1f63a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_colden, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:35:53 np0005592158 systemd[1]: Started libpod-conmon-db1de7d224b8d73b95228ffc14ee54f1c045b4312ada1a22bedaaad4efa1f63a.scope.
Jan 22 10:35:53 np0005592158 podman[255440]: 2026-01-22 15:35:53.29475653 +0000 UTC m=+0.029983952 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:35:53 np0005592158 systemd[1]: Started libcrun container.
Jan 22 10:35:53 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ea0a337a00fa82ba32b629835f3d595a12b0b4ca9abc8d6a8dd5505e0f0ddb8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:35:53 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ea0a337a00fa82ba32b629835f3d595a12b0b4ca9abc8d6a8dd5505e0f0ddb8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:35:53 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ea0a337a00fa82ba32b629835f3d595a12b0b4ca9abc8d6a8dd5505e0f0ddb8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:35:53 np0005592158 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ea0a337a00fa82ba32b629835f3d595a12b0b4ca9abc8d6a8dd5505e0f0ddb8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:35:53 np0005592158 podman[255440]: 2026-01-22 15:35:53.437843571 +0000 UTC m=+0.173070943 container init db1de7d224b8d73b95228ffc14ee54f1c045b4312ada1a22bedaaad4efa1f63a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_colden, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:35:53 np0005592158 podman[255440]: 2026-01-22 15:35:53.445260841 +0000 UTC m=+0.180488203 container start db1de7d224b8d73b95228ffc14ee54f1c045b4312ada1a22bedaaad4efa1f63a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 22 10:35:53 np0005592158 podman[255440]: 2026-01-22 15:35:53.449054974 +0000 UTC m=+0.184282346 container attach db1de7d224b8d73b95228ffc14ee54f1c045b4312ada1a22bedaaad4efa1f63a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_colden, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:35:54 np0005592158 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:54 np0005592158 ceph-mon[81715]: Health check update: 187 slow ops, oldest one blocked for 7143 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:35:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:35:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:54.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:35:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:35:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:35:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:54.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:35:54 np0005592158 flamboyant_colden[255457]: [
Jan 22 10:35:54 np0005592158 flamboyant_colden[255457]:    {
Jan 22 10:35:54 np0005592158 flamboyant_colden[255457]:        "available": false,
Jan 22 10:35:54 np0005592158 flamboyant_colden[255457]:        "ceph_device": false,
Jan 22 10:35:54 np0005592158 flamboyant_colden[255457]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 22 10:35:54 np0005592158 flamboyant_colden[255457]:        "lsm_data": {},
Jan 22 10:35:54 np0005592158 flamboyant_colden[255457]:        "lvs": [],
Jan 22 10:35:54 np0005592158 flamboyant_colden[255457]:        "path": "/dev/sr0",
Jan 22 10:35:54 np0005592158 flamboyant_colden[255457]:        "rejected_reasons": [
Jan 22 10:35:54 np0005592158 flamboyant_colden[255457]:            "Insufficient space (<5GB)",
Jan 22 10:35:54 np0005592158 flamboyant_colden[255457]:            "Has a FileSystem"
Jan 22 10:35:54 np0005592158 flamboyant_colden[255457]:        ],
Jan 22 10:35:54 np0005592158 flamboyant_colden[255457]:        "sys_api": {
Jan 22 10:35:54 np0005592158 flamboyant_colden[255457]:            "actuators": null,
Jan 22 10:35:54 np0005592158 flamboyant_colden[255457]:            "device_nodes": "sr0",
Jan 22 10:35:54 np0005592158 flamboyant_colden[255457]:            "devname": "sr0",
Jan 22 10:35:54 np0005592158 flamboyant_colden[255457]:            "human_readable_size": "482.00 KB",
Jan 22 10:35:54 np0005592158 flamboyant_colden[255457]:            "id_bus": "ata",
Jan 22 10:35:54 np0005592158 flamboyant_colden[255457]:            "model": "QEMU DVD-ROM",
Jan 22 10:35:54 np0005592158 flamboyant_colden[255457]:            "nr_requests": "2",
Jan 22 10:35:54 np0005592158 flamboyant_colden[255457]:            "parent": "/dev/sr0",
Jan 22 10:35:54 np0005592158 flamboyant_colden[255457]:            "partitions": {},
Jan 22 10:35:54 np0005592158 flamboyant_colden[255457]:            "path": "/dev/sr0",
Jan 22 10:35:54 np0005592158 flamboyant_colden[255457]:            "removable": "1",
Jan 22 10:35:54 np0005592158 flamboyant_colden[255457]:            "rev": "2.5+",
Jan 22 10:35:54 np0005592158 flamboyant_colden[255457]:            "ro": "0",
Jan 22 10:35:54 np0005592158 flamboyant_colden[255457]:            "rotational": "1",
Jan 22 10:35:54 np0005592158 flamboyant_colden[255457]:            "sas_address": "",
Jan 22 10:35:54 np0005592158 flamboyant_colden[255457]:            "sas_device_handle": "",
Jan 22 10:35:54 np0005592158 flamboyant_colden[255457]:            "scheduler_mode": "mq-deadline",
Jan 22 10:35:54 np0005592158 flamboyant_colden[255457]:            "sectors": 0,
Jan 22 10:35:54 np0005592158 flamboyant_colden[255457]:            "sectorsize": "2048",
Jan 22 10:35:54 np0005592158 flamboyant_colden[255457]:            "size": 493568.0,
Jan 22 10:35:54 np0005592158 flamboyant_colden[255457]:            "support_discard": "2048",
Jan 22 10:35:54 np0005592158 flamboyant_colden[255457]:            "type": "disk",
Jan 22 10:35:54 np0005592158 flamboyant_colden[255457]:            "vendor": "QEMU"
Jan 22 10:35:54 np0005592158 flamboyant_colden[255457]:        }
Jan 22 10:35:54 np0005592158 flamboyant_colden[255457]:    }
Jan 22 10:35:54 np0005592158 flamboyant_colden[255457]: ]
Jan 22 10:35:54 np0005592158 systemd[1]: libpod-db1de7d224b8d73b95228ffc14ee54f1c045b4312ada1a22bedaaad4efa1f63a.scope: Deactivated successfully.
Jan 22 10:35:54 np0005592158 podman[255440]: 2026-01-22 15:35:54.652616259 +0000 UTC m=+1.387843581 container died db1de7d224b8d73b95228ffc14ee54f1c045b4312ada1a22bedaaad4efa1f63a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_colden, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 10:35:54 np0005592158 systemd[1]: libpod-db1de7d224b8d73b95228ffc14ee54f1c045b4312ada1a22bedaaad4efa1f63a.scope: Consumed 1.233s CPU time.
Jan 22 10:35:54 np0005592158 systemd[1]: var-lib-containers-storage-overlay-2ea0a337a00fa82ba32b629835f3d595a12b0b4ca9abc8d6a8dd5505e0f0ddb8-merged.mount: Deactivated successfully.
Jan 22 10:35:54 np0005592158 podman[255440]: 2026-01-22 15:35:54.710405023 +0000 UTC m=+1.445632345 container remove db1de7d224b8d73b95228ffc14ee54f1c045b4312ada1a22bedaaad4efa1f63a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:35:54 np0005592158 systemd[1]: libpod-conmon-db1de7d224b8d73b95228ffc14ee54f1c045b4312ada1a22bedaaad4efa1f63a.scope: Deactivated successfully.
Jan 22 10:35:54 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:35:55 np0005592158 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:55 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:35:55 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:35:55 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:35:55 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:35:55 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:35:55 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:35:55 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:35:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:35:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:56.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:35:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:56.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:57 np0005592158 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:58 np0005592158 ceph-mon[81715]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:58 np0005592158 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:35:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:35:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 10:35:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:35:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:58.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:58.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:35:59 np0005592158 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:35:59 np0005592158 podman[256727]: 2026-01-22 15:35:59.085355722 +0000 UTC m=+0.069886981 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:35:59 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:36:00 np0005592158 ceph-mon[81715]: Health check update: 187 slow ops, oldest one blocked for 7148 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:00 np0005592158 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:00.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 10:36:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:00.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:01 np0005592158 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:01 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:36:01 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:36:02 np0005592158 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:02 np0005592158 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:02.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 10:36:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:36:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:02.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:36:03 np0005592158 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 10:36:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:04.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:04.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:04 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:36:05 np0005592158 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:05 np0005592158 ceph-mon[81715]: Health check update: 101 slow ops, oldest one blocked for 7153 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:36:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:06.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:36:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:06.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:07 np0005592158 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:07 np0005592158 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:08 np0005592158 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:36:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:08.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:36:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:08.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:09 np0005592158 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:09 np0005592158 ceph-mon[81715]: Health check update: 101 slow ops, oldest one blocked for 7158 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:09 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:36:10 np0005592158 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 10:36:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:10.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:10.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:11 np0005592158 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:11 np0005592158 ceph-mgr[82073]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 10:36:12 np0005592158 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:12 np0005592158 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 10:36:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:36:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:12.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:36:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:36:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:12.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:36:13 np0005592158 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 10:36:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:14.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:36:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:14.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:36:14 np0005592158 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:14 np0005592158 ceph-mon[81715]: Health check update: 101 slow ops, oldest one blocked for 7163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:14 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:36:15 np0005592158 podman[256797]: 2026-01-22 15:36:15.429478282 +0000 UTC m=+0.176368792 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 10:36:15 np0005592158 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 10:36:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:16.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:36:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:16.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:36:16 np0005592158 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:17 np0005592158 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:36:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2621302862' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:36:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:36:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2621302862' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:36:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 10:36:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:18.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:36:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:18.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:36:18 np0005592158 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:19 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:36:19 np0005592158 ceph-mon[81715]: Health check update: 101 slow ops, oldest one blocked for 7168 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:19 np0005592158 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:20.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:20.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:20 np0005592158 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:22 np0005592158 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 10:36:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:36:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:22.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:36:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:22.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:23 np0005592158 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:24.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 10:36:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:24.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:24 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:36:25 np0005592158 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:25 np0005592158 ceph-mon[81715]: Health check update: 101 slow ops, oldest one blocked for 7173 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:25 np0005592158 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 10:36:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:36:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:26.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:36:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:36:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:26.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:36:26 np0005592158 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:28.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44556f0 =====
Jan 22 10:36:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44556f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44556f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:28.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:29 np0005592158 ceph-mon[81715]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:29 np0005592158 ceph-mon[81715]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:29 np0005592158 ceph-mon[81715]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:29 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:36:30 np0005592158 podman[256823]: 2026-01-22 15:36:30.128518112 +0000 UTC m=+0.113476120 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 10:36:30 np0005592158 ceph-mon[81715]: Health check update: 101 slow ops, oldest one blocked for 7178 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:30 np0005592158 ceph-mon[81715]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:30.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:31.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:31 np0005592158 ceph-mon[81715]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:32.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:36:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:33.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:36:33 np0005592158 ceph-mon[81715]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:36:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:34.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:36:34 np0005592158 ceph-mon[81715]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:34 np0005592158 ceph-mon[81715]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:34 np0005592158 ceph-mon[81715]: Health check update: 188 slow ops, oldest one blocked for 7183 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:34 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:36:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:35.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:36 np0005592158 ceph-mon[81715]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:36.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:37.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:37 np0005592158 ceph-mon[81715]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:37 np0005592158 ceph-mon[81715]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:38 np0005592158 ceph-mon[81715]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:36:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:38.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:36:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:36:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:39.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:36:39 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:36:39 np0005592158 ceph-mon[81715]: Health check update: 188 slow ops, oldest one blocked for 7188 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:40.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:40 np0005592158 ceph-mon[81715]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:40 np0005592158 ceph-mon[81715]: 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 10:36:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:41.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:42 np0005592158 ceph-mon[81715]: 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 10:36:42 np0005592158 ceph-mon[81715]: 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 10:36:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:42.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:43.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:43 np0005592158 ceph-mon[81715]: 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 10:36:44 np0005592158 ceph-mon[81715]: 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 10:36:44 np0005592158 ceph-mon[81715]: Health check update: 120 slow ops, oldest one blocked for 7193 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:44.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:44 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:36:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:45.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:45 np0005592158 ceph-mon[81715]: 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 10:36:45 np0005592158 ceph-mon[81715]: 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 10:36:46 np0005592158 podman[256845]: 2026-01-22 15:36:46.115385956 +0000 UTC m=+0.096553583 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 10:36:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:46.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:47 np0005592158 ceph-mon[81715]: 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 10:36:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:47.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:36:47.538 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:36:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:36:47.538 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:36:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:36:47.538 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:36:48 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:48.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:49 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:36:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:49.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:36:49 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:36:50 np0005592158 ceph-mon[81715]: Health check update: 120 slow ops, oldest one blocked for 7197 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:50 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:50.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:51.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:51 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:52 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:52.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:53.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:53 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:54 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:54 np0005592158 ceph-mon[81715]: Health check update: 49 slow ops, oldest one blocked for 7202 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:54.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:54 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:36:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:55.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:55 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:56 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:56.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:57.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:57 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:58 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:58.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:36:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:59.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:59 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:59 np0005592158 ceph-mon[81715]: Health check update: 49 slow ops, oldest one blocked for 7207 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:59 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:37:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:00.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:00 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:37:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:01.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:37:01 np0005592158 podman[256873]: 2026-01-22 15:37:01.118463739 +0000 UTC m=+0.100504829 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 22 10:37:01 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:37:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:02.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:37:02 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:03.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:03 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:03 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:37:03 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:37:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:04.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:04 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:37:04 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:04 np0005592158 ceph-mon[81715]: Health check update: 49 slow ops, oldest one blocked for 7212 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:04 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:04 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:37:04 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:37:04 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:37:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:37:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:05.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:37:06 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:06.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:07 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:37:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:07.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:37:08 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:37:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:08.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:37:09 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:09.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:09 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:37:10 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:10 np0005592158 ceph-mon[81715]: Health check update: 49 slow ops, oldest one blocked for 7217 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:37:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:10.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:37:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:11.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:11 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:11 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:37:11 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:37:12 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:37:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:12.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:37:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:13.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:13 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:14 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:14 np0005592158 ceph-mon[81715]: Health check update: 49 slow ops, oldest one blocked for 7222 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:37:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:14.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:37:14 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:37:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:15.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:15 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:16 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.003000080s ======
Jan 22 10:37:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:16.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Jan 22 10:37:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:17.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:17 np0005592158 podman[257073]: 2026-01-22 15:37:17.157054112 +0000 UTC m=+0.142934847 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, 
org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller)
Jan 22 10:37:17 np0005592158 ceph-mon[81715]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:18 np0005592158 ceph-mon[81715]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:37:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:37:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3339096401' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:37:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:18.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:18 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:37:18 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3339096401' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:37:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:37:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:19.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:37:19 np0005592158 ceph-mon[81715]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:37:19 np0005592158 ceph-mon[81715]: Health check update: 49 slow ops, oldest one blocked for 7227 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:19 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:37:20 np0005592158 ceph-mon[81715]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:37:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:37:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:20.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:37:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:21.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:21 np0005592158 ceph-mon[81715]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:37:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:22.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:22 np0005592158 ceph-mon[81715]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:37:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:37:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:23.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:37:23 np0005592158 ceph-mon[81715]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:37:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:37:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:24.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:37:24 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:37:24 np0005592158 ceph-mon[81715]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:37:24 np0005592158 ceph-mon[81715]: Health check update: 132 slow ops, oldest one blocked for 7232 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:25.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:25 np0005592158 ceph-mon[81715]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:37:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:26.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:27 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:37:27.049 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=64, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=63) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:37:27 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:37:27.050 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:37:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:27.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:28 np0005592158 ceph-mon[81715]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:37:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:28.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:29 np0005592158 ceph-mon[81715]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:37:29 np0005592158 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:29 np0005592158 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:29.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:29 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:37:30 np0005592158 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:30 np0005592158 ceph-mon[81715]: Health check update: 132 slow ops, oldest one blocked for 7237 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:30.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:31.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:31 np0005592158 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:32 np0005592158 podman[257100]: 2026-01-22 15:37:32.097992147 +0000 UTC m=+0.072873732 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 10:37:32 np0005592158 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:37:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:32.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:37:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:33.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:33 np0005592158 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:34 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:37:34.052 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '64'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:37:34 np0005592158 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:34 np0005592158 ceph-mon[81715]: Health check update: 96 slow ops, oldest one blocked for 7242 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:37:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:34.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:37:34 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:37:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:35.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:35 np0005592158 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:36 np0005592158 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:37:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:36.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:37:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:37.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:37 np0005592158 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:38 np0005592158 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:38 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #241. Immutable memtables: 0.
Jan 22 10:37:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:37:38.470892) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:37:38 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 155] Flushing memtable with next log file: 241
Jan 22 10:37:38 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096258470952, "job": 155, "event": "flush_started", "num_memtables": 1, "num_entries": 2744, "num_deletes": 540, "total_data_size": 5170603, "memory_usage": 5230480, "flush_reason": "Manual Compaction"}
Jan 22 10:37:38 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 155] Level-0 flush table #242: started
Jan 22 10:37:38 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096258494595, "cf_name": "default", "job": 155, "event": "table_file_creation", "file_number": 242, "file_size": 3350889, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 118729, "largest_seqno": 121468, "table_properties": {"data_size": 3340601, "index_size": 5693, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3525, "raw_key_size": 32349, "raw_average_key_size": 23, "raw_value_size": 3316033, "raw_average_value_size": 2397, "num_data_blocks": 239, "num_entries": 1383, "num_filter_entries": 1383, "num_deletions": 540, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769096088, "oldest_key_time": 1769096088, "file_creation_time": 1769096258, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 242, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:37:38 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 155] Flush lasted 23781 microseconds, and 12020 cpu microseconds.
Jan 22 10:37:38 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:37:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:37:38.494644) [db/flush_job.cc:967] [default] [JOB 155] Level-0 flush table #242: 3350889 bytes OK
Jan 22 10:37:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:37:38.494703) [db/memtable_list.cc:519] [default] Level-0 commit table #242 started
Jan 22 10:37:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:37:38.495886) [db/memtable_list.cc:722] [default] Level-0 commit table #242: memtable #1 done
Jan 22 10:37:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:37:38.495897) EVENT_LOG_v1 {"time_micros": 1769096258495893, "job": 155, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:37:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:37:38.495914) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:37:38 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 155] Try to delete WAL files size 5157083, prev total WAL file size 5157083, number of live WAL files 2.
Jan 22 10:37:38 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000238.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:37:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:37:38.497224) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130323931' seq:72057594037927935, type:22 .. '7061786F73003130353433' seq:0, type:0; will stop at (end)
Jan 22 10:37:38 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 156] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:37:38 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 155 Base level 0, inputs: [242(3272KB)], [240(10MB)]
Jan 22 10:37:38 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096258497279, "job": 156, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [242], "files_L6": [240], "score": -1, "input_data_size": 14681928, "oldest_snapshot_seqno": -1}
Jan 22 10:37:38 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 156] Generated table #243: 14554 keys, 12846152 bytes, temperature: kUnknown
Jan 22 10:37:38 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096258592808, "cf_name": "default", "job": 156, "event": "table_file_creation", "file_number": 243, "file_size": 12846152, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12765812, "index_size": 42851, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36421, "raw_key_size": 399045, "raw_average_key_size": 27, "raw_value_size": 12517844, "raw_average_value_size": 860, "num_data_blocks": 1556, "num_entries": 14554, "num_filter_entries": 14554, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769096258, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 243, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:37:38 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:37:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:37:38.593101) [db/compaction/compaction_job.cc:1663] [default] [JOB 156] Compacted 1@0 + 1@6 files to L6 => 12846152 bytes
Jan 22 10:37:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:37:38.594771) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.6 rd, 134.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 10.8 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(8.2) write-amplify(3.8) OK, records in: 15651, records dropped: 1097 output_compression: NoCompression
Jan 22 10:37:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:37:38.594793) EVENT_LOG_v1 {"time_micros": 1769096258594782, "job": 156, "event": "compaction_finished", "compaction_time_micros": 95607, "compaction_time_cpu_micros": 39342, "output_level": 6, "num_output_files": 1, "total_output_size": 12846152, "num_input_records": 15651, "num_output_records": 14554, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:37:38 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000242.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:37:38 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096258595584, "job": 156, "event": "table_file_deletion", "file_number": 242}
Jan 22 10:37:38 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000240.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:37:38 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096258598207, "job": 156, "event": "table_file_deletion", "file_number": 240}
Jan 22 10:37:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:37:38.497123) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:37:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:37:38.598291) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:37:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:37:38.598296) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:37:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:37:38.598297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:37:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:37:38.598299) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:37:38 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:37:38.598300) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:37:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:38.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:39.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:39 np0005592158 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:39 np0005592158 ceph-mon[81715]: Health check update: 96 slow ops, oldest one blocked for 7247 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:39 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:37:40 np0005592158 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:40.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:41.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:41 np0005592158 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:37:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:42.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:37:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:43.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:43 np0005592158 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:43 np0005592158 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:44.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:44 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:37:45 np0005592158 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:45 np0005592158 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:45 np0005592158 ceph-mon[81715]: Health check update: 96 slow ops, oldest one blocked for 7253 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:37:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:45.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:37:46 np0005592158 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:37:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:46.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:37:47 np0005592158 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:47 np0005592158 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:47.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:37:47.539 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:37:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:37:47.540 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:37:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:37:47.540 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:37:48 np0005592158 podman[257119]: 2026-01-22 15:37:48.134012703 +0000 UTC m=+0.122720470 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:37:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:48.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:37:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:49.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:37:49 np0005592158 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:49 np0005592158 ceph-mon[81715]: Health check update: 96 slow ops, oldest one blocked for 7258 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:49 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:37:50 np0005592158 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:50.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:51.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:51 np0005592158 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:52 np0005592158 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:52.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:37:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:53.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:37:53 np0005592158 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:54 np0005592158 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:54 np0005592158 ceph-mon[81715]: Health check update: 96 slow ops, oldest one blocked for 7263 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:54.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:54 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:37:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:55.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:56 np0005592158 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:37:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:56.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:37:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:57.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:57 np0005592158 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:57 np0005592158 ceph-mon[81715]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:37:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:58.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:37:59 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:37:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:37:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:59.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:59 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:38:00 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:00 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:00 np0005592158 ceph-mon[81715]: Health check update: 96 slow ops, oldest one blocked for 7267 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:38:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:00.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:38:01 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e181 e181: 3 total, 3 up, 3 in
Jan 22 10:38:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:01.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:01 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:02.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:02 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e182 e182: 3 total, 3 up, 3 in
Jan 22 10:38:02 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:03 np0005592158 podman[257145]: 2026-01-22 15:38:03.094034073 +0000 UTC m=+0.074094207 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 10:38:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:38:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:03.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:38:04 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:04 np0005592158 ceph-mon[81715]: 151 slow requests (by type [ 'delayed' : 151 ] most affected pool [ 'vms' : 90 ])
Jan 22 10:38:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:38:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:04.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:38:04 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:38:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:38:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:05.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:38:05 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:05 np0005592158 ceph-mon[81715]: Health check update: 195 slow ops, oldest one blocked for 7272 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:06 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:06.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:07.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:07 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:08.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:09.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:09 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:09 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 e183: 3 total, 3 up, 3 in
Jan 22 10:38:09 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:38:10 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:10 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:10 np0005592158 ceph-mon[81715]: Health check update: 195 slow ops, oldest one blocked for 7278 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:10.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:11.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:11 np0005592158 podman[257338]: 2026-01-22 15:38:11.362862379 +0000 UTC m=+0.057077915 container exec 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 22 10:38:11 np0005592158 podman[257338]: 2026-01-22 15:38:11.50228236 +0000 UTC m=+0.196497906 container exec_died 50d1ea49dfe76aa000ad6d67b1b7faf4493fc69d8e2ec4e2740b4159c929f891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 22 10:38:11 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:12 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:12 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:38:12 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:38:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:38:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:12.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:38:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:13.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:13 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:13 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:38:13 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:38:13 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:38:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:14 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:38:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:14.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:15 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:15 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:15 np0005592158 ceph-mon[81715]: Health check update: 195 slow ops, oldest one blocked for 7283 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:38:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:15.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:38:16 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:16.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:17.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:17 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:18 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:18.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:19 np0005592158 podman[257593]: 2026-01-22 15:38:19.131451064 +0000 UTC m=+0.110572282 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 22 10:38:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:38:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:19.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:38:19 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:19 np0005592158 ceph-mon[81715]: Health check update: 195 slow ops, oldest one blocked for 7288 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:19 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:38:20 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:38:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:20.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:38:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:38:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:21.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:38:21 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:21 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:38:21 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:38:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:38:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:22.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:38:22 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:23.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:23 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:24 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:38:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:38:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:24.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:38:24 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:24 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:24 np0005592158 ceph-mon[81715]: Health check update: 195 slow ops, oldest one blocked for 7293 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:38:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:25.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:38:25 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:26.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:26 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:27.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:27 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:28.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:29 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:38:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:29.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:38:29 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:38:30 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:30 np0005592158 ceph-mon[81715]: Health check update: 195 slow ops, oldest one blocked for 7298 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:38:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:30.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:38:31 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:31 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:38:31.238 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=65, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=64) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:38:31 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:38:31.240 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:38:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:38:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:31.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:38:32 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:32.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:33.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:33 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:34 np0005592158 podman[257670]: 2026-01-22 15:38:34.085433439 +0000 UTC m=+0.064129894 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 10:38:34 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:34 np0005592158 ceph-mon[81715]: Health check update: 195 slow ops, oldest one blocked for 7302 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:34 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:38:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:34.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:35.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:35 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:38:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:36.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:38:37 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:37 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:38:37.242 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '65'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 10:38:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:37.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:38 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:38 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:38:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:38.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:38:39 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:38:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:39.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:38:39 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:38:40 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:40 np0005592158 ceph-mon[81715]: Health check update: 195 slow ops, oldest one blocked for 7307 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:40.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:41 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:41.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:42 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:42.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:38:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:43.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:38:43 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:44 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:44 np0005592158 ceph-mon[81715]: Health check update: 195 slow ops, oldest one blocked for 7312 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:44 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:38:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:38:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:44.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:38:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:45.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:45 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:46 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:46.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:47.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:38:47.540 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 10:38:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:38:47.541 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 10:38:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:38:47.541 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 10:38:47 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:48.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:49.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:49 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:49 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #244. Immutable memtables: 0.
Jan 22 10:38:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:38:49.718511) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:38:49 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 157] Flushing memtable with next log file: 244
Jan 22 10:38:49 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096329718543, "job": 157, "event": "flush_started", "num_memtables": 1, "num_entries": 1301, "num_deletes": 380, "total_data_size": 2116724, "memory_usage": 2160112, "flush_reason": "Manual Compaction"}
Jan 22 10:38:49 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 157] Level-0 flush table #245: started
Jan 22 10:38:49 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096329728944, "cf_name": "default", "job": 157, "event": "table_file_creation", "file_number": 245, "file_size": 1389488, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 121473, "largest_seqno": 122769, "table_properties": {"data_size": 1384040, "index_size": 2458, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 16488, "raw_average_key_size": 22, "raw_value_size": 1371300, "raw_average_value_size": 1838, "num_data_blocks": 104, "num_entries": 746, "num_filter_entries": 746, "num_deletions": 380, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769096259, "oldest_key_time": 1769096259, "file_creation_time": 1769096329, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 245, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:38:49 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 157] Flush lasted 10469 microseconds, and 4427 cpu microseconds.
Jan 22 10:38:49 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:38:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:38:49.728982) [db/flush_job.cc:967] [default] [JOB 157] Level-0 flush table #245: 1389488 bytes OK
Jan 22 10:38:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:38:49.728998) [db/memtable_list.cc:519] [default] Level-0 commit table #245 started
Jan 22 10:38:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:38:49.729973) [db/memtable_list.cc:722] [default] Level-0 commit table #245: memtable #1 done
Jan 22 10:38:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:38:49.729986) EVENT_LOG_v1 {"time_micros": 1769096329729982, "job": 157, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:38:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:38:49.730001) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:38:49 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 157] Try to delete WAL files size 2109829, prev total WAL file size 2109829, number of live WAL files 2.
Jan 22 10:38:49 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000241.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:38:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:38:49.730714) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0035373930' seq:72057594037927935, type:22 .. '6C6F676D0036303433' seq:0, type:0; will stop at (end)
Jan 22 10:38:49 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 158] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:38:49 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 157 Base level 0, inputs: [245(1356KB)], [243(12MB)]
Jan 22 10:38:49 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096329730796, "job": 158, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [245], "files_L6": [243], "score": -1, "input_data_size": 14235640, "oldest_snapshot_seqno": -1}
Jan 22 10:38:49 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 158] Generated table #246: 14521 keys, 14041153 bytes, temperature: kUnknown
Jan 22 10:38:49 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096329828277, "cf_name": "default", "job": 158, "event": "table_file_creation", "file_number": 246, "file_size": 14041153, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13959442, "index_size": 44286, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36357, "raw_key_size": 398934, "raw_average_key_size": 27, "raw_value_size": 13710447, "raw_average_value_size": 944, "num_data_blocks": 1614, "num_entries": 14521, "num_filter_entries": 14521, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769096329, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 246, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:38:49 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:38:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:38:49.828500) [db/compaction/compaction_job.cc:1663] [default] [JOB 158] Compacted 1@0 + 1@6 files to L6 => 14041153 bytes
Jan 22 10:38:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:38:49.829576) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.9 rd, 144.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 12.3 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(20.4) write-amplify(10.1) OK, records in: 15300, records dropped: 779 output_compression: NoCompression
Jan 22 10:38:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:38:49.829591) EVENT_LOG_v1 {"time_micros": 1769096329829584, "job": 158, "event": "compaction_finished", "compaction_time_micros": 97540, "compaction_time_cpu_micros": 63720, "output_level": 6, "num_output_files": 1, "total_output_size": 14041153, "num_input_records": 15300, "num_output_records": 14521, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:38:49 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000245.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:38:49 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096329829908, "job": 158, "event": "table_file_deletion", "file_number": 245}
Jan 22 10:38:49 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000243.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:38:49 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096329832245, "job": 158, "event": "table_file_deletion", "file_number": 243}
Jan 22 10:38:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:38:49.730579) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:38:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:38:49.832317) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:38:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:38:49.832326) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:38:49 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:38:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:38:49.833046) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:38:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:38:49.833074) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:38:49 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:38:49.833079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:38:50 np0005592158 podman[257689]: 2026-01-22 15:38:50.100479778 +0000 UTC m=+0.085593366 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, 
io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 22 10:38:50 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:50 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:50 np0005592158 ceph-mon[81715]: Health check update: 195 slow ops, oldest one blocked for 7317 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:50.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:38:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:51.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:38:51 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:52 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:52.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:38:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:53.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:38:53 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:54 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:38:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:54.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:55 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:55 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:55 np0005592158 ceph-mon[81715]: Health check update: 195 slow ops, oldest one blocked for 7322 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:55.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:56 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:38:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:56.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:38:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:57.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:57 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:38:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:58.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:38:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:38:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:59.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:59 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:59 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:59 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:39:00 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:00 np0005592158 ceph-mon[81715]: Health check update: 195 slow ops, oldest one blocked for 7327 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:00 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:39:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:00.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:39:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:01.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:02 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:02 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:02.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:03.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:03 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:04 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:04 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:39:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:04.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:05 np0005592158 podman[257715]: 2026-01-22 15:39:05.053342703 +0000 UTC m=+0.046969692 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 22 10:39:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:05.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:05 np0005592158 ceph-mon[81715]: Health check update: 195 slow ops, oldest one blocked for 7333 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:05 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:06 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:06.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:07.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:07 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:08.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:09 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:09.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:09 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:39:09 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:39:09.916 139715 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=66, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=65) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:39:09 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:39:09.917 139715 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:39:10 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:10 np0005592158 ceph-mon[81715]: Health check update: 195 slow ops, oldest one blocked for 7338 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:10 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:39:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:10.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:39:11 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:39:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:11.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:39:12 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:12 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:12 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:39:12 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:12.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:39:13 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:13.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:14 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:14 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:39:14 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:14 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:14 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:14.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:15.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:15 np0005592158 ceph-mon[81715]: Health check update: 195 slow ops, oldest one blocked for 7343 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:15 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:15 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:39:15.919 139715 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c803af81-5cf0-46ac-8f46-401e876a838c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '66'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:39:16 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:16 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:16 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:16.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:17 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:39:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:17.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:39:18 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:18 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:18 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:18 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:18 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:18.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:39:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:19.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:39:19 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:19 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:39:20 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:20 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:20 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:20.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:21 np0005592158 podman[257736]: 2026-01-22 15:39:21.115489925 +0000 UTC m=+0.112513545 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible)
Jan 22 10:39:21 np0005592158 ceph-mon[81715]: Health check update: 195 slow ops, oldest one blocked for 7348 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:21 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:21.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:22 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:22 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:22 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:39:22 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:39:22 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:22 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:39:22 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:22.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:39:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:23.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:23 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:24 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:24 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:24 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:24.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:25 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:39:25 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:25 np0005592158 ceph-mon[81715]: Health check update: 195 slow ops, oldest one blocked for 7353 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:25.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:26 np0005592158 ceph-mon[81715]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:26 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:26 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:39:26 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:26 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:26 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:26.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:27.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:28 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:39:28 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:39:28 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:28 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:39:28 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:39:28 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:28 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:28 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:28.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:29 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:29.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:30 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:39:30 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:30 np0005592158 ceph-mon[81715]: Health check update: 195 slow ops, oldest one blocked for 7358 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:30 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:30 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:30 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:30.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:31.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:31 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:32 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:32 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:32 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:32 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:32.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:33.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:34 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:34 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:39:34 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:34 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:39:34 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:34 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:34 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:34.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:35 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:39:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:35.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:36 np0005592158 podman[257943]: 2026-01-22 15:39:36.06654735 +0000 UTC m=+0.061516895 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 10:39:36 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:36 np0005592158 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 7363 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:36 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:36 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:36 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:36.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:37.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:37 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:37 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:38 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:38 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:38 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:38.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:39:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:39.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:39:39 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:39 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:40 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:39:40 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:40 np0005592158 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 7368 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:40 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:40 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:40 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:40.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:41.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:41 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:42 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:42 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:42 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:42 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:42.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:43.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:43 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:44 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:44 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:44 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:44 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:44.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:45 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:39:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:45.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:45 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:45 np0005592158 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 7373 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:46 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:46 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:46 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:39:46 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:46.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:39:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:47.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:39:47.541 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:39:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:39:47.541 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:39:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:39:47.542 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:39:47 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:48 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:48 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:48 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:48 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:48.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:49.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:49 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:50 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:39:50 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:50 np0005592158 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 7378 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:50 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:50 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:50 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:50 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:50.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:51.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:51 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:52 np0005592158 podman[257962]: 2026-01-22 15:39:52.107578919 +0000 UTC m=+0.090335616 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 10:39:52 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:52 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:52 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:52 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:52.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:53.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:53 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:54 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:54 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:54 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:39:54 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:54.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:39:55 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:39:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:55.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:55 np0005592158 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 7383 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:55 np0005592158 ceph-mon[81715]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:56 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:56 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:56 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:56.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:56 np0005592158 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:39:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:57.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:57 np0005592158 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:39:57 np0005592158 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:39:58 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:58 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:58 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:58.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:59 np0005592158 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:39:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:39:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:59.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:00 np0005592158 ceph-mon[81715]: Health check update: 79 slow ops, oldest one blocked for 7388 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:00 np0005592158 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:00 np0005592158 ceph-mon[81715]: Health detail: HEALTH_WARN 79 slow ops, oldest one blocked for 7388 sec, osd.2 has slow ops
Jan 22 10:40:00 np0005592158 ceph-mon[81715]: [WRN] SLOW_OPS: 79 slow ops, oldest one blocked for 7388 sec, osd.2 has slow ops
Jan 22 10:40:00 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:40:00 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:00 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:40:00 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:00.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:40:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:01.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:02 np0005592158 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:02 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:02 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:40:02 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:02.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:40:03 np0005592158 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:03 np0005592158 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:03.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:04 np0005592158 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:04 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #247. Immutable memtables: 0.
Jan 22 10:40:04 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:04.829196) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:40:04 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 159] Flushing memtable with next log file: 247
Jan 22 10:40:04 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096404829249, "job": 159, "event": "flush_started", "num_memtables": 1, "num_entries": 1346, "num_deletes": 384, "total_data_size": 2279152, "memory_usage": 2306344, "flush_reason": "Manual Compaction"}
Jan 22 10:40:04 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 159] Level-0 flush table #248: started
Jan 22 10:40:04 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096404837938, "cf_name": "default", "job": 159, "event": "table_file_creation", "file_number": 248, "file_size": 989946, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 122774, "largest_seqno": 124115, "table_properties": {"data_size": 985175, "index_size": 1845, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 16963, "raw_average_key_size": 23, "raw_value_size": 973300, "raw_average_value_size": 1333, "num_data_blocks": 77, "num_entries": 730, "num_filter_entries": 730, "num_deletions": 384, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769096330, "oldest_key_time": 1769096330, "file_creation_time": 1769096404, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 248, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:40:04 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 159] Flush lasted 8761 microseconds, and 4369 cpu microseconds.
Jan 22 10:40:04 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:40:04 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:04.837973) [db/flush_job.cc:967] [default] [JOB 159] Level-0 flush table #248: 989946 bytes OK
Jan 22 10:40:04 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:04.837989) [db/memtable_list.cc:519] [default] Level-0 commit table #248 started
Jan 22 10:40:04 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:04.839181) [db/memtable_list.cc:722] [default] Level-0 commit table #248: memtable #1 done
Jan 22 10:40:04 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:04.839196) EVENT_LOG_v1 {"time_micros": 1769096404839192, "job": 159, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:40:04 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:04.839212) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:40:04 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 159] Try to delete WAL files size 2272058, prev total WAL file size 2272058, number of live WAL files 2.
Jan 22 10:40:04 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000244.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:40:04 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:04.840193) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033353039' seq:72057594037927935, type:22 .. '6D6772737461740033373632' seq:0, type:0; will stop at (end)
Jan 22 10:40:04 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 160] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:40:04 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 159 Base level 0, inputs: [248(966KB)], [246(13MB)]
Jan 22 10:40:04 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096404840305, "job": 160, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [248], "files_L6": [246], "score": -1, "input_data_size": 15031099, "oldest_snapshot_seqno": -1}
Jan 22 10:40:04 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 160] Generated table #249: 14500 keys, 11547156 bytes, temperature: kUnknown
Jan 22 10:40:04 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096404926893, "cf_name": "default", "job": 160, "event": "table_file_creation", "file_number": 249, "file_size": 11547156, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11469209, "index_size": 40586, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36293, "raw_key_size": 398265, "raw_average_key_size": 27, "raw_value_size": 11224120, "raw_average_value_size": 774, "num_data_blocks": 1460, "num_entries": 14500, "num_filter_entries": 14500, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769096404, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 249, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:40:04 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:40:04 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:04.927228) [db/compaction/compaction_job.cc:1663] [default] [JOB 160] Compacted 1@0 + 1@6 files to L6 => 11547156 bytes
Jan 22 10:40:04 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:04.928799) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 173.4 rd, 133.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 13.4 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(26.8) write-amplify(11.7) OK, records in: 15251, records dropped: 751 output_compression: NoCompression
Jan 22 10:40:04 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:04.928825) EVENT_LOG_v1 {"time_micros": 1769096404928813, "job": 160, "event": "compaction_finished", "compaction_time_micros": 86678, "compaction_time_cpu_micros": 35825, "output_level": 6, "num_output_files": 1, "total_output_size": 11547156, "num_input_records": 15251, "num_output_records": 14500, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:40:04 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000248.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:40:04 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096404929234, "job": 160, "event": "table_file_deletion", "file_number": 248}
Jan 22 10:40:04 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000246.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:40:04 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096404934265, "job": 160, "event": "table_file_deletion", "file_number": 246}
Jan 22 10:40:04 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:04.840095) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:40:04 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:04.934389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:40:04 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:04.934398) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:40:04 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:04.934402) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:40:04 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:04.934405) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:40:04 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:04.934408) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:40:04 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:04 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:04 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:04.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:05 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:40:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:05.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:05 np0005592158 ceph-mon[81715]: Health check update: 98 slow ops, oldest one blocked for 7393 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:05 np0005592158 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:06 np0005592158 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:06 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:06 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:06 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:06.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:07 np0005592158 podman[257989]: 2026-01-22 15:40:07.086623313 +0000 UTC m=+0.058402161 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 10:40:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:07.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:07 np0005592158 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:08 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:08 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:40:08 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:08.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:40:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:09.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:09 np0005592158 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:10 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:40:10 np0005592158 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:10 np0005592158 ceph-mon[81715]: Health check update: 98 slow ops, oldest one blocked for 7398 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:10 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:10 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:10 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:10.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:11.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:11 np0005592158 ceph-mon[81715]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:12 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:12.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:13.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:14 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:14 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:14 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:15.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:15 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:40:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:15.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:15 np0005592158 ceph-mon[81715]: Health check update: 98 slow ops, oldest one blocked for 7403 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:16 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:40:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:17.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:40:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:17.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:17 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:18 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:19.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:19.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:19 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:19 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:19 np0005592158 ceph-mon[81715]: Health check update: 199 slow ops, oldest one blocked for 7408 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:20 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:40:20 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:40:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:21.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:40:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:21.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:21 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:22 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:23.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:23 np0005592158 podman[258011]: 2026-01-22 15:40:23.119414859 +0000 UTC m=+0.093433948 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 22 10:40:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:23.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:24 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:24 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:25.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:25.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:25 np0005592158 ceph-mon[81715]: Health check update: 199 slow ops, oldest one blocked for 7413 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:25 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:25 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:40:26 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:27.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:27.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:27 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:28 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:40:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:29.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:40:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:29.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:29 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:29 np0005592158 ceph-mon[81715]: Health check update: 199 slow ops, oldest one blocked for 7418 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:30 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:40:30 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:40:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:31.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:40:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:31.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:31 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:31 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #250. Immutable memtables: 0.
Jan 22 10:40:31 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:31.969364) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:40:31 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 161] Flushing memtable with next log file: 250
Jan 22 10:40:31 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096431969400, "job": 161, "event": "flush_started", "num_memtables": 1, "num_entries": 636, "num_deletes": 298, "total_data_size": 728167, "memory_usage": 741176, "flush_reason": "Manual Compaction"}
Jan 22 10:40:31 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 161] Level-0 flush table #251: started
Jan 22 10:40:31 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096431974811, "cf_name": "default", "job": 161, "event": "table_file_creation", "file_number": 251, "file_size": 476871, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 124120, "largest_seqno": 124751, "table_properties": {"data_size": 473883, "index_size": 831, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 9239, "raw_average_key_size": 21, "raw_value_size": 467164, "raw_average_value_size": 1076, "num_data_blocks": 36, "num_entries": 434, "num_filter_entries": 434, "num_deletions": 298, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769096405, "oldest_key_time": 1769096405, "file_creation_time": 1769096431, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 251, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:40:31 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 161] Flush lasted 5484 microseconds, and 1765 cpu microseconds.
Jan 22 10:40:31 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:40:31 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:31.974849) [db/flush_job.cc:967] [default] [JOB 161] Level-0 flush table #251: 476871 bytes OK
Jan 22 10:40:31 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:31.974863) [db/memtable_list.cc:519] [default] Level-0 commit table #251 started
Jan 22 10:40:31 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:31.975943) [db/memtable_list.cc:722] [default] Level-0 commit table #251: memtable #1 done
Jan 22 10:40:31 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:31.975954) EVENT_LOG_v1 {"time_micros": 1769096431975951, "job": 161, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:40:31 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:31.975967) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:40:31 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 161] Try to delete WAL files size 724377, prev total WAL file size 724377, number of live WAL files 2.
Jan 22 10:40:31 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000247.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:40:31 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:31.976401) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130353432' seq:72057594037927935, type:22 .. '7061786F73003130373934' seq:0, type:0; will stop at (end)
Jan 22 10:40:31 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 162] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:40:31 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 161 Base level 0, inputs: [251(465KB)], [249(11MB)]
Jan 22 10:40:31 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096431976438, "job": 162, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [251], "files_L6": [249], "score": -1, "input_data_size": 12024027, "oldest_snapshot_seqno": -1}
Jan 22 10:40:32 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 162] Generated table #252: 14329 keys, 10215738 bytes, temperature: kUnknown
Jan 22 10:40:32 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096432039925, "cf_name": "default", "job": 162, "event": "table_file_creation", "file_number": 252, "file_size": 10215738, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10140138, "index_size": 38687, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35845, "raw_key_size": 394998, "raw_average_key_size": 27, "raw_value_size": 9899258, "raw_average_value_size": 690, "num_data_blocks": 1380, "num_entries": 14329, "num_filter_entries": 14329, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769096431, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 252, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:40:32 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:40:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:32.040156) [db/compaction/compaction_job.cc:1663] [default] [JOB 162] Compacted 1@0 + 1@6 files to L6 => 10215738 bytes
Jan 22 10:40:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:32.042126) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 189.2 rd, 160.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 11.0 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(46.6) write-amplify(21.4) OK, records in: 14934, records dropped: 605 output_compression: NoCompression
Jan 22 10:40:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:32.042143) EVENT_LOG_v1 {"time_micros": 1769096432042134, "job": 162, "event": "compaction_finished", "compaction_time_micros": 63556, "compaction_time_cpu_micros": 25756, "output_level": 6, "num_output_files": 1, "total_output_size": 10215738, "num_input_records": 14934, "num_output_records": 14329, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:40:32 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000251.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:40:32 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096432042308, "job": 162, "event": "table_file_deletion", "file_number": 251}
Jan 22 10:40:32 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000249.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:40:32 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096432044350, "job": 162, "event": "table_file_deletion", "file_number": 249}
Jan 22 10:40:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:31.976331) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:40:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:32.044424) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:40:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:32.044431) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:40:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:32.044432) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:40:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:32.044442) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:40:32 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:40:32.044444) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:40:32 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:32 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:33.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:33.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:34 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:34 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:40:34 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:40:34 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:40:34 np0005592158 ceph-mon[81715]: Health check update: 199 slow ops, oldest one blocked for 7423 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:34 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:34 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:40:34 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:40:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:35.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:40:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:35.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:40:35 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:40:36 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:37.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:37 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:40:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:37.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:40:38 np0005592158 podman[258285]: 2026-01-22 15:40:38.055254377 +0000 UTC m=+0.051614928 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 10:40:38 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:39.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:39.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:39 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:40 np0005592158 ceph-mon[81715]: Health check update: 199 slow ops, oldest one blocked for 7428 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:40 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:40 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:40:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:41.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:41.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:42 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:40:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:43.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:40:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:43.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:43 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:43 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:40:43 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:43 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:40:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:45.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:45 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:45.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:45 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:40:45 np0005592158 ceph-mon[81715]: Health check update: 199 slow ops, oldest one blocked for 7433 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:45 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:47.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:47.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:40:47.542 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:40:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:40:47.542 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:40:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:40:47.542 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:40:47 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:47 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:48 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:49.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:49.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:49 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:49 np0005592158 ceph-mon[81715]: Health check update: 199 slow ops, oldest one blocked for 7438 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:50 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:40:50 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:50 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:51.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:51.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:53.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:53 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:53.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:54 np0005592158 podman[258355]: 2026-01-22 15:40:54.100646484 +0000 UTC m=+0.082912183 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller)
Jan 22 10:40:54 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:55.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:55.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:55 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:55 np0005592158 ceph-mon[81715]: Health check update: 199 slow ops, oldest one blocked for 7443 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:55 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:55 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:40:56 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:40:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:57.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:40:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:57.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:57 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:58 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:59.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:40:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:59.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:00 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:00 np0005592158 ceph-mon[81715]: Health check update: 199 slow ops, oldest one blocked for 7448 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:00 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:41:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:41:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:01.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:41:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:41:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:01.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:41:01 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:41:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:03.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:41:03 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:03 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:03.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:04 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:04 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:05.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:05 np0005592158 ceph-mon[81715]: Health check update: 199 slow ops, oldest one blocked for 7453 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:05.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:05 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:41:06 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:07.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:07 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:07.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:08 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:09 np0005592158 podman[258381]: 2026-01-22 15:41:09.056054969 +0000 UTC m=+0.051131335 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:41:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:09.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:09.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:10 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:10 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:41:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:11.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:11 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:11 np0005592158 ceph-mon[81715]: Health check update: 199 slow ops, oldest one blocked for 7458 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:11 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:11.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:12 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:13.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:13 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:13.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:14 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:15.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:15.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:15 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:15 np0005592158 ceph-mon[81715]: Health check update: 199 slow ops, oldest one blocked for 7463 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:15 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:41:16 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:17.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:17.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:17 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:18 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:19.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:19.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:19 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:20 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:20 np0005592158 ceph-mon[81715]: Health check update: 199 slow ops, oldest one blocked for 7467 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:20 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:41:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:21.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:21.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:21 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:22 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:23.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:23.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:23 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:24 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:25.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:25 np0005592158 podman[258400]: 2026-01-22 15:41:25.130096963 +0000 UTC m=+0.108737302 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251202)
Jan 22 10:41:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:41:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:25.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:41:25 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:41:25 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:25 np0005592158 ceph-mon[81715]: Health check update: 199 slow ops, oldest one blocked for 7472 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:41:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:27.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:41:27 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:27.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:28 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:28 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:28 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:29.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:29.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:29 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:30 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:41:30 np0005592158 ceph-mon[81715]: Health check update: 199 slow ops, oldest one blocked for 7477 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:30 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:31.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:41:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:31.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:41:31 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:33 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:33.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:33.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:34 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:35.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:35 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:35 np0005592158 ceph-mon[81715]: Health check update: 199 slow ops, oldest one blocked for 7482 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:35.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:35 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:41:36 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:36 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:41:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:37.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:41:37 np0005592158 ceph-mon[81715]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:37.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:38 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:39.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:39.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:39 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:40 np0005592158 podman[258426]: 2026-01-22 15:41:40.067928372 +0000 UTC m=+0.055418981 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 10:41:40 np0005592158 ceph-mon[81715]: Health check update: 199 slow ops, oldest one blocked for 7487 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:40 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:40 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:41:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:41.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:41.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:41 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:42 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:43.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:43.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:43 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:44 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:44 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:41:44 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:41:44 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 10:41:44 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 10:41:44 np0005592158 ceph-mon[81715]: Health check update: 6 slow ops, oldest one blocked for 7492 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:45.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:45.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:45 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:41:46 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:47 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:47 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:41:47 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:41:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:47.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:41:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:47.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:41:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:41:47.542 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:41:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:41:47.542 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:41:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:41:47.542 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:41:48 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:48 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:41:48 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:41:48 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:41:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:49.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:49 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:49 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:49.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:50 np0005592158 ceph-mon[81715]: Health check update: 6 slow ops, oldest one blocked for 7497 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:50 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:50 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:41:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:41:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:51.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:41:51 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:51.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:52 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:53.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:53 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:53.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:54 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:41:54 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:41:54 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:55.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:55 np0005592158 ceph-mon[81715]: Health check update: 6 slow ops, oldest one blocked for 7502 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:55 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:55.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:55 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:41:56 np0005592158 podman[258627]: 2026-01-22 15:41:56.10704384 +0000 UTC m=+0.093787757 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 22 10:41:56 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:57.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:57 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:57.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:59.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:59 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:41:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:59.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:00 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:42:00 np0005592158 ceph-mon[81715]: Health check update: 6 slow ops, oldest one blocked for 7507 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:00 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:42:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:01.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:01.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:01 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:42:01 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:42:02 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:42:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:03.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:03.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:04 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:42:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:42:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:05.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:42:05 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:42:05 np0005592158 ceph-mon[81715]: Health check update: 6 slow ops, oldest one blocked for 7512 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:05.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:05 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:42:06 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:42:06 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:42:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:07.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:07 np0005592158 ceph-mon[81715]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:42:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:07.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:42:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:09.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:42:09 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:09.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:10 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:10 np0005592158 ceph-mon[81715]: Health check update: 6 slow ops, oldest one blocked for 7517 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:10 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:42:11 np0005592158 podman[258653]: 2026-01-22 15:42:11.054857918 +0000 UTC m=+0.047845855 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 10:42:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:11.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:11.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:11 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:42:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:13.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:42:13 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:13.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:14 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:14 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:42:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:15.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:42:15 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:15 np0005592158 ceph-mon[81715]: Health check update: 7 slow ops, oldest one blocked for 7522 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:15 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:42:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:15.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:42:15 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:42:16 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:17.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:17.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:18 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:18 np0005592158 ceph-mon[81715]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:19.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:19.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:20 np0005592158 ceph-mon[81715]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:42:20 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:42:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:21.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:21 np0005592158 ceph-mon[81715]: Health check update: 7 slow ops, oldest one blocked for 7527 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:21 np0005592158 ceph-mon[81715]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:42:21 np0005592158 ceph-mon[81715]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:42:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:42:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:21.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:42:22 np0005592158 ceph-mon[81715]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:42:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:23.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:23 np0005592158 ceph-mon[81715]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:42:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:23.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:25.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:25 np0005592158 ceph-mon[81715]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:42:25 np0005592158 ceph-mon[81715]: Health check update: 179 slow ops, oldest one blocked for 7532 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:25.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:25 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:42:26 np0005592158 ceph-mon[81715]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:42:27 np0005592158 podman[258672]: 2026-01-22 15:42:27.106324079 +0000 UTC m=+0.096302486 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 10:42:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:42:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:27.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:42:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:27.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:27 np0005592158 ceph-mon[81715]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:42:28 np0005592158 ceph-mon[81715]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:42:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:42:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:29.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:42:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:29.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:29 np0005592158 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:30 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:42:31 np0005592158 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:31 np0005592158 ceph-mon[81715]: Health check update: 179 slow ops, oldest one blocked for 7537 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:31 np0005592158 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 10:42:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:31.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:31.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:32 np0005592158 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:33 np0005592158 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:33.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:33.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:34 np0005592158 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:34 np0005592158 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:35 np0005592158 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:35 np0005592158 ceph-mon[81715]: Health check update: 158 slow ops, oldest one blocked for 7542 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:35.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:35.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:36 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:42:36 np0005592158 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:37.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:37 np0005592158 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:37.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:38 np0005592158 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:39.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:39 np0005592158 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:39.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:40 np0005592158 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:40 np0005592158 ceph-mon[81715]: Health check update: 158 slow ops, oldest one blocked for 7547 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:41 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:42:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:41.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:41 np0005592158 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:41.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:42 np0005592158 podman[258700]: 2026-01-22 15:42:42.066954715 +0000 UTC m=+0.048179126 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 22 10:42:42 np0005592158 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:42:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:43.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:42:43 np0005592158 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:43.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:44 np0005592158 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:45.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:45 np0005592158 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:45 np0005592158 ceph-mon[81715]: Health check update: 158 slow ops, oldest one blocked for 7552 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:45.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:46 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:42:46 np0005592158 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:47.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:42:47.543 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:42:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:42:47.543 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:42:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:42:47.543 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:42:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:47.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:47 np0005592158 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:49 np0005592158 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:49.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:42:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:49.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:42:50 np0005592158 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:50 np0005592158 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:50 np0005592158 ceph-mon[81715]: Health check update: 158 slow ops, oldest one blocked for 7557 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:51 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:42:51 np0005592158 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:51.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:51.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:52 np0005592158 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:53 np0005592158 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:53.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:53.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:54 np0005592158 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:55.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:55 np0005592158 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:55 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:42:55 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:42:55 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:42:55 np0005592158 ceph-mon[81715]: Health check update: 158 slow ops, oldest one blocked for 7562 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:42:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:55.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:42:56 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:42:56 np0005592158 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:42:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:57.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:42:57 np0005592158 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:42:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:57.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:42:58 np0005592158 podman[258850]: 2026-01-22 15:42:58.127860202 +0000 UTC m=+0.111611610 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 22 10:42:58 np0005592158 ceph-mon[81715]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:59.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:59 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:42:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:42:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:59.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:00 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:00 np0005592158 ceph-mon[81715]: Health check update: 158 slow ops, oldest one blocked for 7567 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:01 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:43:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:43:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:01.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:43:01 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:01.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:02 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:02 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:43:02 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:43:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:03.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:03 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:43:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:03.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:43:04 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:05.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:05 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:05 np0005592158 ceph-mon[81715]: Health check update: 207 slow ops, oldest one blocked for 7572 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:43:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:05.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:43:06 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:43:06 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:43:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:07.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:43:07 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:43:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:07.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:43:08 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:09.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:09 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:09.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:10 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:10 np0005592158 ceph-mon[81715]: Health check update: 207 slow ops, oldest one blocked for 7577 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:11 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:43:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:11.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:11.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:11 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:13 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:13 np0005592158 podman[258927]: 2026-01-22 15:43:13.053587293 +0000 UTC m=+0.044986178 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 10:43:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:43:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:13.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:43:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:13.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:14 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:15.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:15 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:15 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:15 np0005592158 ceph-mon[81715]: Health check update: 207 slow ops, oldest one blocked for 7582 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:15.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:16 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:43:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:43:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:17.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:43:17 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:17.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:18 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:18 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:19.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:19 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:19.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:20 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:20 np0005592158 ceph-mon[81715]: Health check update: 207 slow ops, oldest one blocked for 7587 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:21 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:43:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:21.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:21 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:21.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:22 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:22 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #253. Immutable memtables: 0.
Jan 22 10:43:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:22.647773) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:43:22 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 163] Flushing memtable with next log file: 253
Jan 22 10:43:22 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096602647803, "job": 163, "event": "flush_started", "num_memtables": 1, "num_entries": 2748, "num_deletes": 540, "total_data_size": 5066096, "memory_usage": 5144136, "flush_reason": "Manual Compaction"}
Jan 22 10:43:22 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 163] Level-0 flush table #254: started
Jan 22 10:43:22 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096602669790, "cf_name": "default", "job": 163, "event": "table_file_creation", "file_number": 254, "file_size": 3292028, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 124756, "largest_seqno": 127499, "table_properties": {"data_size": 3281741, "index_size": 5692, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3525, "raw_key_size": 32521, "raw_average_key_size": 23, "raw_value_size": 3257045, "raw_average_value_size": 2344, "num_data_blocks": 239, "num_entries": 1389, "num_filter_entries": 1389, "num_deletions": 540, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769096432, "oldest_key_time": 1769096432, "file_creation_time": 1769096602, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 254, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:43:22 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 163] Flush lasted 22059 microseconds, and 10168 cpu microseconds.
Jan 22 10:43:22 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:43:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:22.669829) [db/flush_job.cc:967] [default] [JOB 163] Level-0 flush table #254: 3292028 bytes OK
Jan 22 10:43:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:22.669848) [db/memtable_list.cc:519] [default] Level-0 commit table #254 started
Jan 22 10:43:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:22.672489) [db/memtable_list.cc:722] [default] Level-0 commit table #254: memtable #1 done
Jan 22 10:43:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:22.672506) EVENT_LOG_v1 {"time_micros": 1769096602672500, "job": 163, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:43:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:22.672524) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:43:22 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 163] Try to delete WAL files size 5052548, prev total WAL file size 5052548, number of live WAL files 2.
Jan 22 10:43:22 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000250.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:43:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:22.673996) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130373933' seq:72057594037927935, type:22 .. '7061786F73003131303435' seq:0, type:0; will stop at (end)
Jan 22 10:43:22 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 164] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:43:22 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 163 Base level 0, inputs: [254(3214KB)], [252(9976KB)]
Jan 22 10:43:22 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096602674034, "job": 164, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [254], "files_L6": [252], "score": -1, "input_data_size": 13507766, "oldest_snapshot_seqno": -1}
Jan 22 10:43:22 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 164] Generated table #255: 14621 keys, 11671635 bytes, temperature: kUnknown
Jan 22 10:43:22 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096602753491, "cf_name": "default", "job": 164, "event": "table_file_creation", "file_number": 255, "file_size": 11671635, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11592585, "index_size": 41369, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36613, "raw_key_size": 400577, "raw_average_key_size": 27, "raw_value_size": 11345137, "raw_average_value_size": 775, "num_data_blocks": 1495, "num_entries": 14621, "num_filter_entries": 14621, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769096602, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 255, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:43:22 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:43:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:22.753851) [db/compaction/compaction_job.cc:1663] [default] [JOB 164] Compacted 1@0 + 1@6 files to L6 => 11671635 bytes
Jan 22 10:43:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:22.755109) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.8 rd, 146.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 9.7 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(7.6) write-amplify(3.5) OK, records in: 15718, records dropped: 1097 output_compression: NoCompression
Jan 22 10:43:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:22.755130) EVENT_LOG_v1 {"time_micros": 1769096602755121, "job": 164, "event": "compaction_finished", "compaction_time_micros": 79554, "compaction_time_cpu_micros": 45668, "output_level": 6, "num_output_files": 1, "total_output_size": 11671635, "num_input_records": 15718, "num_output_records": 14621, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:43:22 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000254.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:43:22 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096602756085, "job": 164, "event": "table_file_deletion", "file_number": 254}
Jan 22 10:43:22 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000252.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:43:22 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096602758530, "job": 164, "event": "table_file_deletion", "file_number": 252}
Jan 22 10:43:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:22.673888) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:43:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:22.758598) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:43:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:22.758603) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:43:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:22.758605) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:43:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:22.758607) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:43:22 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:22.758609) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:43:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:43:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:23.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:43:23 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:23.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:24 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:25.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:25.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:25 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:25 np0005592158 ceph-mon[81715]: Health check update: 207 slow ops, oldest one blocked for 7592 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:26 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:43:26 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:27.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:27.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:27 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:29 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:29 np0005592158 podman[258946]: 2026-01-22 15:43:29.151995195 +0000 UTC m=+0.127481610 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 10:43:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:29.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:29.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:30 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:30 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:30 np0005592158 ceph-mon[81715]: Health check update: 207 slow ops, oldest one blocked for 7597 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:31 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:43:31 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:31.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:31.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:32 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:33 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:33.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:33.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:34 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:35 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:35 np0005592158 ceph-mon[81715]: Health check update: 207 slow ops, oldest one blocked for 7602 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:35.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:35.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:36 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:43:36 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:37.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:37 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:37.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:38 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:39.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:39 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:39.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:40 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #256. Immutable memtables: 0.
Jan 22 10:43:40 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:40.131950) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:43:40 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:856] [default] [JOB 165] Flushing memtable with next log file: 256
Jan 22 10:43:40 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096620132039, "job": 165, "event": "flush_started", "num_memtables": 1, "num_entries": 501, "num_deletes": 287, "total_data_size": 454547, "memory_usage": 464376, "flush_reason": "Manual Compaction"}
Jan 22 10:43:40 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:885] [default] [JOB 165] Level-0 flush table #257: started
Jan 22 10:43:40 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096620136556, "cf_name": "default", "job": 165, "event": "table_file_creation", "file_number": 257, "file_size": 297704, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 127504, "largest_seqno": 128000, "table_properties": {"data_size": 295140, "index_size": 535, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7222, "raw_average_key_size": 19, "raw_value_size": 289573, "raw_average_value_size": 774, "num_data_blocks": 23, "num_entries": 374, "num_filter_entries": 374, "num_deletions": 287, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769096603, "oldest_key_time": 1769096603, "file_creation_time": 1769096620, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 257, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:43:40 np0005592158 ceph-mon[81715]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 165] Flush lasted 4644 microseconds, and 2546 cpu microseconds.
Jan 22 10:43:40 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:43:40 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:40.136607) [db/flush_job.cc:967] [default] [JOB 165] Level-0 flush table #257: 297704 bytes OK
Jan 22 10:43:40 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:40.136627) [db/memtable_list.cc:519] [default] Level-0 commit table #257 started
Jan 22 10:43:40 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:40.138360) [db/memtable_list.cc:722] [default] Level-0 commit table #257: memtable #1 done
Jan 22 10:43:40 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:40.138374) EVENT_LOG_v1 {"time_micros": 1769096620138370, "job": 165, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:43:40 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:40.138393) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:43:40 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 165] Try to delete WAL files size 451377, prev total WAL file size 451377, number of live WAL files 2.
Jan 22 10:43:40 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000253.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:43:40 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:40.138934) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0036303432' seq:72057594037927935, type:22 .. '6C6F676D0036323937' seq:0, type:0; will stop at (end)
Jan 22 10:43:40 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 166] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:43:40 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 165 Base level 0, inputs: [257(290KB)], [255(11MB)]
Jan 22 10:43:40 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096620138969, "job": 166, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [257], "files_L6": [255], "score": -1, "input_data_size": 11969339, "oldest_snapshot_seqno": -1}
Jan 22 10:43:40 np0005592158 ceph-mon[81715]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 166] Generated table #258: 14412 keys, 11805136 bytes, temperature: kUnknown
Jan 22 10:43:40 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096620208137, "cf_name": "default", "job": 166, "event": "table_file_creation", "file_number": 258, "file_size": 11805136, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11726941, "index_size": 41090, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36037, "raw_key_size": 397078, "raw_average_key_size": 27, "raw_value_size": 11482650, "raw_average_value_size": 796, "num_data_blocks": 1479, "num_entries": 14412, "num_filter_entries": 14412, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088931, "oldest_key_time": 0, "file_creation_time": 1769096620, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b45e9535-17c1-4c17-af76-e2f7345eb341", "db_session_id": "61AVSUXQ8FJR5Z10R2GN", "orig_file_number": 258, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:43:40 np0005592158 ceph-mon[81715]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:43:40 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:40.208348) [db/compaction/compaction_job.cc:1663] [default] [JOB 166] Compacted 1@0 + 1@6 files to L6 => 11805136 bytes
Jan 22 10:43:40 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:40.209594) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 172.9 rd, 170.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 11.1 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(79.9) write-amplify(39.7) OK, records in: 14995, records dropped: 583 output_compression: NoCompression
Jan 22 10:43:40 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:40.209608) EVENT_LOG_v1 {"time_micros": 1769096620209602, "job": 166, "event": "compaction_finished", "compaction_time_micros": 69233, "compaction_time_cpu_micros": 28963, "output_level": 6, "num_output_files": 1, "total_output_size": 11805136, "num_input_records": 14995, "num_output_records": 14412, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:43:40 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000257.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:43:40 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096620209766, "job": 166, "event": "table_file_deletion", "file_number": 257}
Jan 22 10:43:40 np0005592158 ceph-mon[81715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000255.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:43:40 np0005592158 ceph-mon[81715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096620211937, "job": 166, "event": "table_file_deletion", "file_number": 255}
Jan 22 10:43:40 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:40.138868) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:43:40 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:40.211963) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:43:40 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:40.211967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:43:40 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:40.211968) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:43:40 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:40.211970) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:43:40 np0005592158 ceph-mon[81715]: rocksdb: (Original Log Time 2026/01/22-15:43:40.211971) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:43:40 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:40 np0005592158 ceph-mon[81715]: Health check update: 207 slow ops, oldest one blocked for 7607 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:41 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:43:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:41.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:41.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:42 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:43 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:43.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:43:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:43.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:43:44 np0005592158 podman[258973]: 2026-01-22 15:43:44.057396826 +0000 UTC m=+0.047647170 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 10:43:44 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:44 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:45.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:45 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:45 np0005592158 ceph-mon[81715]: Health check update: 207 slow ops, oldest one blocked for 7612 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:45.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:46 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:43:46 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:43:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:47.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:43:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:43:47.543 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:43:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:43:47.544 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:43:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:43:47.544 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:43:47 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:47.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:49 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:49.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:49.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:50 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:51 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:43:51 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:51 np0005592158 ceph-mon[81715]: Health check update: 207 slow ops, oldest one blocked for 7617 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:51.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:43:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:51.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:43:52 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:53.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:53 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:53 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:53.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:54 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:55.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:55 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:55 np0005592158 ceph-mon[81715]: Health check update: 207 slow ops, oldest one blocked for 7622 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:43:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:55.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:43:56 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:43:56 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:57.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:57.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:57 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:58 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:43:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:59.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:43:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:43:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:43:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:59.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:43:59 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:00 np0005592158 podman[258992]: 2026-01-22 15:44:00.147465982 +0000 UTC m=+0.131294282 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 22 10:44:00 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:00 np0005592158 ceph-mon[81715]: Health check update: 207 slow ops, oldest one blocked for 7627 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:01 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:44:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:01.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:01.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:01 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:02 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:03.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:44:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:03.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:44:03 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:03 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:44:03 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:44:03 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:44:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:05.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:05 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:05.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:06 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:44:06 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:06 np0005592158 ceph-mon[81715]: Health check update: 207 slow ops, oldest one blocked for 7633 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:06 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:44:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:07.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:44:07 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:44:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:07.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:44:08 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:44:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:09.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:44:09 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:09 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:44:09 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:44:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:09.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:10 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:10 np0005592158 ceph-mon[81715]: Health check update: 207 slow ops, oldest one blocked for 7638 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:11 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:44:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:44:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:11.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:44:11 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:11.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:12 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:13.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:13 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:13.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:14 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:15 np0005592158 podman[259199]: 2026-01-22 15:44:15.084864498 +0000 UTC m=+0.060972591 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 22 10:44:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:15.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:15.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:16 np0005592158 ceph-mon[81715]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:16 np0005592158 ceph-mon[81715]: Health check update: 207 slow ops, oldest one blocked for 7642 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:16 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:44:16 np0005592158 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:17.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:17.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:17 np0005592158 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:18 np0005592158 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:19.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:19.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:20 np0005592158 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:21 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:44:21 np0005592158 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:21 np0005592158 ceph-mon[81715]: Health check update: 207 slow ops, oldest one blocked for 7648 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:21.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:44:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:21.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:44:22 np0005592158 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:23.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:23.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:24 np0005592158 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:24 np0005592158 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:25 np0005592158 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:25.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:44:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:25.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:44:26 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:44:26 np0005592158 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:26 np0005592158 ceph-mon[81715]: Health check update: 127 slow ops, oldest one blocked for 7653 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:27 np0005592158 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:27.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:27.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:28 np0005592158 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:29 np0005592158 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:29.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:29.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:30 np0005592158 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:30 np0005592158 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:30 np0005592158 ceph-mon[81715]: Health check update: 127 slow ops, oldest one blocked for 7658 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:31 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:44:31 np0005592158 podman[259219]: 2026-01-22 15:44:31.134520071 +0000 UTC m=+0.117725256 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:44:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:31.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:31.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:32 np0005592158 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:33 np0005592158 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:33.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:33.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:34 np0005592158 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:34 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 10:44:34 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.5 total, 600.0 interval#012Cumulative writes: 17K writes, 51K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.00 MB/s#012Cumulative WAL: 17K writes, 6346 syncs, 2.79 writes per sync, written: 0.04 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 709 writes, 1322 keys, 709 commit groups, 1.0 writes per commit group, ingest: 0.52 MB, 0.00 MB/s#012Interval WAL: 709 writes, 322 syncs, 2.20 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 10:44:35 np0005592158 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:35.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:44:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:35.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:44:36 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:44:36 np0005592158 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:36 np0005592158 ceph-mon[81715]: Health check update: 127 slow ops, oldest one blocked for 7662 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:37 np0005592158 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:37.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:44:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:37.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:44:38 np0005592158 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:38 np0005592158 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:39 np0005592158 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:39.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:39.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:40 np0005592158 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:40 np0005592158 ceph-mon[81715]: Health check update: 127 slow ops, oldest one blocked for 7667 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:41 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:44:41 np0005592158 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:41.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:44:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:41.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:44:42 np0005592158 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:43.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:43 np0005592158 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:43.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:44 np0005592158 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:44:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:45.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:44:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:44:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:45.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:44:45 np0005592158 ceph-mon[81715]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:45 np0005592158 ceph-mon[81715]: Health check update: 127 slow ops, oldest one blocked for 7672 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:46 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:44:46 np0005592158 podman[259244]: 2026-01-22 15:44:46.092558215 +0000 UTC m=+0.085051831 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 10:44:47 np0005592158 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:44:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:47.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:44:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:44:47.545 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:44:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:44:47.546 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:44:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:44:47.546 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:44:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:47.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:48 np0005592158 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:44:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:49.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:44:49 np0005592158 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:49 np0005592158 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:49.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:51 np0005592158 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:51 np0005592158 ceph-mon[81715]: Health check update: 127 slow ops, oldest one blocked for 7677 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:51 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:44:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:51.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:44:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:51.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:44:51 np0005592158 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:52 np0005592158 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:53.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:53 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:53 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:53 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:53.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:53 np0005592158 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:54 np0005592158 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:55.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:55 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:55 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:44:55 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:55.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:44:55 np0005592158 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:55 np0005592158 ceph-mon[81715]: Health check update: 211 slow ops, oldest one blocked for 7682 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:56 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:44:56 np0005592158 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:57.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:57 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:57 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:44:57 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:57.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:44:57 np0005592158 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:58 np0005592158 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:44:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:59.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:44:59 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:44:59 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:59 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:59.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:59 np0005592158 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:01 np0005592158 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:01 np0005592158 ceph-mon[81715]: Health check update: 211 slow ops, oldest one blocked for 7687 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:01 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:45:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:01.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:01 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:01 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:01 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:01.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:02 np0005592158 podman[259265]: 2026-01-22 15:45:02.116453174 +0000 UTC m=+0.110452178 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 22 10:45:02 np0005592158 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:03 np0005592158 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:03 np0005592158 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:45:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:03.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:45:03 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:03 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:03 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:03.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:04 np0005592158 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:05 np0005592158 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:05 np0005592158 ceph-mon[81715]: Health check update: 211 slow ops, oldest one blocked for 7692 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:05.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:05 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:05 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:05 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:05.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:06 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:45:06 np0005592158 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:07.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:07 np0005592158 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:07 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:07 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:07 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:07.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:08 np0005592158 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:09.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:09 np0005592158 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:09 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:09 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:45:09 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:09.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:45:10 np0005592158 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:10 np0005592158 ceph-mon[81715]: Health check update: 211 slow ops, oldest one blocked for 7697 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:10 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 10:45:10 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:45:10 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:45:10 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:45:11 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:45:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:11.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:11 np0005592158 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:11 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:11 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:11 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:11.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:12 np0005592158 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:13.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:13 np0005592158 ceph-mon[81715]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:13 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:13 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:13 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:13.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:14 np0005592158 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:15.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:15 np0005592158 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:15 np0005592158 ceph-mon[81715]: Health check update: 211 slow ops, oldest one blocked for 7702 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:15 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:15 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:45:15 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:15.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:45:16 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:45:16 np0005592158 podman[259446]: 2026-01-22 15:45:16.505964691 +0000 UTC m=+0.053227071 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 10:45:16 np0005592158 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:16 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:45:16 np0005592158 ceph-mon[81715]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:45:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:45:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:17.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:45:17 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:17 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:17 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:17.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:17 np0005592158 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:18 np0005592158 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:19.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:19 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:19 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:19 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:19.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:20 np0005592158 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:21 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:45:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:45:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:21.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:45:21 np0005592158 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:21 np0005592158 ceph-mon[81715]: Health check update: 177 slow ops, oldest one blocked for 7708 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:21 np0005592158 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:21 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:21 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:21 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:21.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:22 np0005592158 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:23.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:23 np0005592158 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:23 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:23 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:45:23 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:23.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:45:24 np0005592158 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:25.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:25 np0005592158 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:25 np0005592158 ceph-mon[81715]: Health check update: 177 slow ops, oldest one blocked for 7712 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:25 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:25 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:45:25 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:25.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:45:26 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:45:26 np0005592158 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:27.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:27 np0005592158 systemd-logind[787]: New session 51 of user zuul.
Jan 22 10:45:27 np0005592158 systemd[1]: Started Session 51 of User zuul.
Jan 22 10:45:27 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:27 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:45:27 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:27.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:45:28 np0005592158 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:28 np0005592158 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:45:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:29.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:45:29 np0005592158 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:29 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:29 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:29 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:29.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:30 np0005592158 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:30 np0005592158 ceph-mon[81715]: Health check update: 177 slow ops, oldest one blocked for 7718 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:31 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Jan 22 10:45:31 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3667014604' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 22 10:45:31 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:45:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:31.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:31 np0005592158 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:31 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 10:45:31 np0005592158 ceph-mon[81715]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 7800.0 total, 600.0 interval
Cumulative writes: 23K writes, 129K keys, 23K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.03 MB/s
Cumulative WAL: 23K writes, 23K syncs, 1.00 writes per sync, written: 0.22 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1823 writes, 10K keys, 1823 commit groups, 1.0 writes per commit group, ingest: 16.88 MB, 0.03 MB/s
Interval WAL: 1823 writes, 1823 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     79.6      1.71              0.46        83    0.021       0      0       0.0       0.0
  L6      1/0   11.26 MB   0.0      0.9     0.1      0.8       0.8      0.0       0.0   6.0    138.4    120.1      6.83              2.58        82    0.083    926K    51K       0.0       0.0
 Sum      1/0   11.26 MB   0.0      0.9     0.1      0.8       0.9      0.1       0.0   7.0    110.6    111.9      8.54              3.04       165    0.052    926K    51K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.4    136.9    137.7      0.57              0.27        12    0.047     91K   4912       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.9     0.1      0.8       0.8      0.0       0.0   0.0    138.4    120.1      6.83              2.58        82    0.083    926K    51K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     79.7      1.71              0.46        82    0.021       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 7800.0 total, 600.0 interval
Flush(GB): cumulative 0.133, interval 0.009
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.93 GB write, 0.12 MB/s write, 0.92 GB read, 0.12 MB/s read, 8.5 seconds
Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 0.6 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55f7686a91f0#2 capacity: 304.00 MB usage: 96.19 MB table_size: 0 occupancy: 18446744073709551615 collections: 14 last_copies: 0 last_secs: 0.000592 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(5008,90.63 MB,29.8109%) FilterBlock(165,2.52 MB,0.828045%) IndexBlock(165,3.05 MB,1.00344%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Jan 22 10:45:31 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:31 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:31 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:31.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:33 np0005592158 podman[259748]: 2026-01-22 15:45:33.192408509 +0000 UTC m=+0.119483704 container health_status 89c9efba157aab60cd0957c44cf442c52d51b95550a74870ab7476805d1b5536 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 22 10:45:33 np0005592158 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:33.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:33 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:33 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:45:33 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:33.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:45:34 np0005592158 ovs-vsctl[259803]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 22 10:45:34 np0005592158 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:34 np0005592158 virtqemud[220928]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 22 10:45:34 np0005592158 virtqemud[220928]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 22 10:45:34 np0005592158 virtqemud[220928]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 22 10:45:35 np0005592158 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:35 np0005592158 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:35 np0005592158 ceph-mon[81715]: Health check update: 177 slow ops, oldest one blocked for 7723 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:35.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:35 np0005592158 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj asok_command: cache status {prefix=cache status} (starting...)
Jan 22 10:45:35 np0005592158 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj Can't run that command on an inactive MDS!
Jan 22 10:45:35 np0005592158 lvm[260120]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 10:45:35 np0005592158 lvm[260120]: VG ceph_vg0 finished
Jan 22 10:45:35 np0005592158 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj asok_command: client ls {prefix=client ls} (starting...)
Jan 22 10:45:35 np0005592158 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj Can't run that command on an inactive MDS!
Jan 22 10:45:35 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:35 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:35 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:35.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:36 np0005592158 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj asok_command: damage ls {prefix=damage ls} (starting...)
Jan 22 10:45:36 np0005592158 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj Can't run that command on an inactive MDS!
Jan 22 10:45:36 np0005592158 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:36 np0005592158 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj asok_command: dump loads {prefix=dump loads} (starting...)
Jan 22 10:45:36 np0005592158 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj Can't run that command on an inactive MDS!
Jan 22 10:45:36 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 22 10:45:36 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/624135693' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 22 10:45:36 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:45:36 np0005592158 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 22 10:45:36 np0005592158 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj Can't run that command on an inactive MDS!
Jan 22 10:45:36 np0005592158 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 22 10:45:36 np0005592158 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj Can't run that command on an inactive MDS!
Jan 22 10:45:36 np0005592158 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 22 10:45:36 np0005592158 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj Can't run that command on an inactive MDS!
Jan 22 10:45:36 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:45:36 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3217916333' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:45:36 np0005592158 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 22 10:45:36 np0005592158 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj Can't run that command on an inactive MDS!
Jan 22 10:45:37 np0005592158 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 22 10:45:37 np0005592158 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj Can't run that command on an inactive MDS!
Jan 22 10:45:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Jan 22 10:45:37 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4183734222' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 22 10:45:37 np0005592158 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 22 10:45:37 np0005592158 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj Can't run that command on an inactive MDS!
Jan 22 10:45:37 np0005592158 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:37.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:37 np0005592158 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj asok_command: ops {prefix=ops} (starting...)
Jan 22 10:45:37 np0005592158 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj Can't run that command on an inactive MDS!
Jan 22 10:45:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Jan 22 10:45:37 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3749877864' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 22 10:45:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Jan 22 10:45:37 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1858915271' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 22 10:45:37 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:37 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:37 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:37.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:37 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 22 10:45:37 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1030413829' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 22 10:45:38 np0005592158 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj asok_command: session ls {prefix=session ls} (starting...)
Jan 22 10:45:38 np0005592158 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj Can't run that command on an inactive MDS!
Jan 22 10:45:38 np0005592158 ceph-mds[83358]: mds.cephfs.compute-1.ofmmzj asok_command: status {prefix=status} (starting...)
Jan 22 10:45:38 np0005592158 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 22 10:45:38 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1857267379' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 22 10:45:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 22 10:45:38 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2297523029' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 22 10:45:38 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 22 10:45:38 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/606502140' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 22 10:45:39 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Jan 22 10:45:39 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3170290335' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 22 10:45:39 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 22 10:45:39 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/158912215' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 10:45:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:39.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:39 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Jan 22 10:45:39 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1158656698' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 22 10:45:39 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:39 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:45:39 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:39.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:45:40 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Jan 22 10:45:40 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2912699497' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 22 10:45:40 np0005592158 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:40 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 22 10:45:40 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/226829701' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 22 10:45:40 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Jan 22 10:45:40 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1073452591' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 22 10:45:41 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 22 10:45:41 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2501336058' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 22 10:45:41 np0005592158 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:41 np0005592158 ceph-mon[81715]: Health check update: 177 slow ops, oldest one blocked for 7728 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:41 np0005592158 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:41 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:45:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:41.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133365760 unmapped: 39813120 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133365760 unmapped: 39813120 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2252742 data_alloc: 218103808 data_used: 18550784
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133365760 unmapped: 39813120 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 39804928 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 39804928 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 39804928 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 39804928 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2252742 data_alloc: 218103808 data_used: 18550784
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133373952 unmapped: 39804928 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133390336 unmapped: 39788544 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133390336 unmapped: 39788544 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133390336 unmapped: 39788544 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133390336 unmapped: 39788544 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2252742 data_alloc: 218103808 data_used: 18550784
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133390336 unmapped: 39788544 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133390336 unmapped: 39788544 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133390336 unmapped: 39788544 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133390336 unmapped: 39788544 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133390336 unmapped: 39788544 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2252742 data_alloc: 218103808 data_used: 18550784
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133390336 unmapped: 39788544 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133390336 unmapped: 39788544 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133390336 unmapped: 39788544 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133390336 unmapped: 39788544 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 39780352 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2252742 data_alloc: 218103808 data_used: 18550784
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 39780352 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 39780352 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 39780352 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 39780352 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 39780352 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2252742 data_alloc: 218103808 data_used: 18550784
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 39780352 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 39763968 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 39763968 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 39763968 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 39763968 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2252742 data_alloc: 218103808 data_used: 18550784
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 39763968 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 39763968 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 39763968 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 39763968 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 39763968 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2252742 data_alloc: 218103808 data_used: 18550784
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 39763968 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 39763968 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 39763968 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 39763968 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 39763968 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2252742 data_alloc: 218103808 data_used: 18550784
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 39763968 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 39763968 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 39763968 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 39763968 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 39763968 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2252742 data_alloc: 218103808 data_used: 18550784
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 39763968 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133431296 unmapped: 39747584 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133431296 unmapped: 39747584 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133431296 unmapped: 39747584 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133431296 unmapped: 39747584 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2252742 data_alloc: 218103808 data_used: 18550784
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133431296 unmapped: 39747584 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133431296 unmapped: 39747584 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133431296 unmapped: 39747584 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133431296 unmapped: 39747584 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133431296 unmapped: 39747584 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2252742 data_alloc: 218103808 data_used: 18550784
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133439488 unmapped: 39739392 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133439488 unmapped: 39739392 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133439488 unmapped: 39739392 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133439488 unmapped: 39739392 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133439488 unmapped: 39739392 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2252742 data_alloc: 218103808 data_used: 18550784
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133439488 unmapped: 39739392 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133439488 unmapped: 39739392 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133439488 unmapped: 39739392 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133439488 unmapped: 39739392 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133439488 unmapped: 39739392 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2252742 data_alloc: 218103808 data_used: 18550784
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133439488 unmapped: 39739392 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 39723008 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 39723008 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 39723008 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 39723008 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2252742 data_alloc: 218103808 data_used: 18550784
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 39723008 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 39723008 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 39723008 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 39723008 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 39723008 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2252742 data_alloc: 218103808 data_used: 18550784
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 39723008 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 39723008 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 39723008 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 39723008 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 117.671112061s of 117.708808899s, submitted: 13
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 39714816 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e25000/0x0/0x1bfc00000, data 0x7661c43/0x6c39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2258056 data_alloc: 218103808 data_used: 18550784
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 39714816 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f3d2d000 session 0x55b6f4db41e0
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 39714816 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e25000/0x0/0x1bfc00000, data 0x7661c43/0x6c39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 39714816 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e25000/0x0/0x1bfc00000, data 0x7661c43/0x6c39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 39714816 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 39714816 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2258056 data_alloc: 218103808 data_used: 18550784
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 39714816 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 39714816 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 39714816 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f5156c00 session 0x55b6f2954780
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f5157800 session 0x55b6f464d860
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 39714816 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 39714816 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2253662 data_alloc: 218103808 data_used: 18550784
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 39714816 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 39714816 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 39714816 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133464064 unmapped: 39714816 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f3d2d000 session 0x55b6f4d59a40
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133472256 unmapped: 39706624 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f4680800 session 0x55b6f4fca3c0
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2253342 data_alloc: 218103808 data_used: 18550784
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133472256 unmapped: 39706624 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133472256 unmapped: 39706624 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4e66000/0x0/0x1bfc00000, data 0x7621c33/0x6bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.354337692s of 18.391880035s, submitted: 10
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f4d62000 session 0x55b6f2bdb680
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133480448 unmapped: 39698432 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133480448 unmapped: 39698432 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133480448 unmapped: 39698432 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2255170 data_alloc: 218103808 data_used: 18550784
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133480448 unmapped: 39698432 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f5156c00 session 0x55b6f41cef00
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f5156000 session 0x55b6f227da40
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133480448 unmapped: 39698432 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 39288832 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4d89000/0x0/0x1bfc00000, data 0x76fcc6c/0x6cd5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 39288832 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b42bf000/0x0/0x1bfc00000, data 0x81c6c6c/0x779f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133922816 unmapped: 39256064 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2345981 data_alloc: 218103808 data_used: 18550784
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133931008 unmapped: 39247872 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133931008 unmapped: 39247872 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133931008 unmapped: 39247872 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b42bf000/0x0/0x1bfc00000, data 0x81c6c6c/0x779f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133931008 unmapped: 39247872 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133931008 unmapped: 39247872 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2345981 data_alloc: 218103808 data_used: 18550784
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133939200 unmapped: 39239680 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.394145966s of 13.541505814s, submitted: 38
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f3d2d000 session 0x55b6f4e00780
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133980160 unmapped: 39198720 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133980160 unmapped: 39198720 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133980160 unmapped: 39198720 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133980160 unmapped: 39198720 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133980160 unmapped: 39198720 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133980160 unmapped: 39198720 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133980160 unmapped: 39198720 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6000.5 total, 600.0 interval
Cumulative writes: 15K writes, 47K keys, 15K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
Cumulative WAL: 15K writes, 5211 syncs, 2.94 writes per sync, written: 0.03 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 872 writes, 1903 keys, 872 commit groups, 1.0 writes per commit group, ingest: 0.90 MB, 0.00 MB/s
Interval WAL: 872 writes, 408 syncs, 2.14 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.5 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.5 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55b6f07e3610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.5 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtab
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f73da400 session 0x55b6f4dbf4a0
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133988352 unmapped: 39190528 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f5b5bc00 session 0x55b6f4fca780
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f2006c00 session 0x55b6f4dda960
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f722f000 session 0x55b6f2ff63c0
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 39182336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f5c2bc00 session 0x55b6f1996d20
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f40df000 session 0x55b6f4b1d4a0
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298041 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 39174144 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f459e400 session 0x55b6f4de3860
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f459ec00 session 0x55b6f2a743c0
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133619712 unmapped: 39559168 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298361 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133619712 unmapped: 39559168 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 155.705093384s of 155.759750366s, submitted: 19
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f514c400 session 0x55b6f227dc20
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e4000/0x0/0x1bfc00000, data 0x7ba0c7c/0x717a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133668864 unmapped: 39510016 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f514d400 session 0x55b6f29543c0
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133668864 unmapped: 39510016 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c1a/0x7179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133677056 unmapped: 39501824 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133685248 unmapped: 39493632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299837 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133685248 unmapped: 39493632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f3d2d000 session 0x55b6f4db5a40
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 133783552 unmapped: 39395328 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134881280 unmapped: 38297600 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134914048 unmapped: 38264832 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 134979584 unmapped: 38199296 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135020544 unmapped: 38158336 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.372165680s of 10.111143112s, submitted: 313
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135036928 unmapped: 38141952 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135053312 unmapped: 38125568 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135069696 unmapped: 38109184 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135077888 unmapped: 38100992 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135077888 unmapped: 38100992 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135077888 unmapped: 38100992 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135077888 unmapped: 38100992 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135077888 unmapped: 38100992 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135077888 unmapped: 38100992 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135077888 unmapped: 38100992 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135077888 unmapped: 38100992 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135077888 unmapped: 38100992 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135077888 unmapped: 38100992 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135077888 unmapped: 38100992 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135086080 unmapped: 38092800 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135086080 unmapped: 38092800 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135086080 unmapped: 38092800 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135086080 unmapped: 38092800 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135086080 unmapped: 38092800 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135086080 unmapped: 38092800 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135086080 unmapped: 38092800 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135086080 unmapped: 38092800 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135094272 unmapped: 38084608 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135094272 unmapped: 38084608 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135094272 unmapped: 38084608 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135094272 unmapped: 38084608 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135094272 unmapped: 38084608 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135102464 unmapped: 38076416 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135102464 unmapped: 38076416 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135110656 unmapped: 38068224 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135110656 unmapped: 38068224 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135110656 unmapped: 38068224 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135110656 unmapped: 38068224 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135118848 unmapped: 38060032 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135118848 unmapped: 38060032 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135118848 unmapped: 38060032 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f722f400 session 0x55b6f4dbe1e0
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135118848 unmapped: 38060032 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135118848 unmapped: 38060032 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135118848 unmapped: 38060032 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135118848 unmapped: 38060032 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135118848 unmapped: 38060032 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135118848 unmapped: 38060032 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135127040 unmapped: 38051840 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135127040 unmapped: 38051840 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135127040 unmapped: 38051840 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135127040 unmapped: 38051840 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135127040 unmapped: 38051840 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135127040 unmapped: 38051840 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135127040 unmapped: 38051840 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135127040 unmapped: 38051840 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f2027c00 session 0x55b6f4db4b40
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 22 10:45:41 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2764929686' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135135232 unmapped: 38043648 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135143424 unmapped: 38035456 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f4681400 session 0x55b6f29554a0
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135143424 unmapped: 38035456 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135143424 unmapped: 38035456 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135143424 unmapped: 38035456 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135143424 unmapped: 38035456 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135151616 unmapped: 38027264 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135151616 unmapped: 38027264 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135151616 unmapped: 38027264 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135151616 unmapped: 38027264 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135151616 unmapped: 38027264 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135151616 unmapped: 38027264 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135151616 unmapped: 38027264 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135151616 unmapped: 38027264 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f5b59400 session 0x55b6f227cf00
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f65e2400 session 0x55b6f3ed5680
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f7231000 session 0x55b6f41ced20
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f5089000 session 0x55b6f4f60960
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135159808 unmapped: 38019072 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f4681800 session 0x55b6f2ed0000
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 38002688 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f45b1c00 session 0x55b6f2eb9a40
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f5b5ac00 session 0x55b6f47f5e00
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135168000 unmapped: 38010880 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f4680c00 session 0x55b6f3ef2000
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135184384 unmapped: 37994496 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135184384 unmapped: 37994496 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135184384 unmapped: 37994496 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135184384 unmapped: 37994496 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135184384 unmapped: 37994496 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135184384 unmapped: 37994496 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135184384 unmapped: 37994496 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135184384 unmapped: 37994496 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135184384 unmapped: 37994496 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135184384 unmapped: 37994496 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135192576 unmapped: 37986304 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135192576 unmapped: 37986304 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f207b000 session 0x55b6f4d59e00
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135192576 unmapped: 37986304 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135192576 unmapped: 37986304 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135192576 unmapped: 37986304 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135192576 unmapped: 37986304 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135192576 unmapped: 37986304 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135192576 unmapped: 37986304 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135192576 unmapped: 37986304 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135192576 unmapped: 37986304 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135192576 unmapped: 37986304 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135192576 unmapped: 37986304 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135192576 unmapped: 37986304 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f5eda400 session 0x55b6f4224780
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135192576 unmapped: 37986304 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135192576 unmapped: 37986304 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135192576 unmapped: 37986304 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135192576 unmapped: 37986304 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 37978112 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 37978112 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 37978112 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 37978112 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 37978112 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 37978112 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 37978112 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 37978112 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 37978112 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 37978112 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 37978112 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 37978112 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 37978112 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 37978112 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 37978112 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f2f24400 session 0x55b6f4fcaf00
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f459cc00 session 0x55b6f29545a0
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135405568 unmapped: 37773312 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135405568 unmapped: 37773312 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 313.078704834s of 313.278930664s, submitted: 80
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f219c800 session 0x55b6f4225e00
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f65e3400 session 0x55b6f4d58b40
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135421952 unmapped: 37756928 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e5000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135421952 unmapped: 37756928 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299834 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135421952 unmapped: 37756928 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135438336 unmapped: 37740544 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135487488 unmapped: 37691392 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135528448 unmapped: 37650432 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135544832 unmapped: 37634048 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298797 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135561216 unmapped: 37617664 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135561216 unmapped: 37617664 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135561216 unmapped: 37617664 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135561216 unmapped: 37617664 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135561216 unmapped: 37617664 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298797 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135561216 unmapped: 37617664 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298797 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298797 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298797 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298797 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298797 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298797 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135569408 unmapped: 37609472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 37601280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 37601280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298797 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 37601280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 37601280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 37601280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 37601280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 37601280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298797 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 37601280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 37601280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 37601280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 37601280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 37601280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298797 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 37601280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 37601280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 37601280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 37601280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 37601280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298797 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 37601280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 37601280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 37601280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135585792 unmapped: 37593088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135585792 unmapped: 37593088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298797 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135585792 unmapped: 37593088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135585792 unmapped: 37593088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135585792 unmapped: 37593088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135585792 unmapped: 37593088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135593984 unmapped: 37584896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298797 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135593984 unmapped: 37584896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135593984 unmapped: 37584896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135593984 unmapped: 37584896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135593984 unmapped: 37584896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135593984 unmapped: 37584896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298797 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135593984 unmapped: 37584896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135593984 unmapped: 37584896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135593984 unmapped: 37584896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135602176 unmapped: 37576704 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135602176 unmapped: 37576704 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298797 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135602176 unmapped: 37576704 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135602176 unmapped: 37576704 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135602176 unmapped: 37576704 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135602176 unmapped: 37576704 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135602176 unmapped: 37576704 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298797 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135602176 unmapped: 37576704 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135602176 unmapped: 37576704 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135602176 unmapped: 37576704 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135602176 unmapped: 37576704 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135602176 unmapped: 37576704 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298797 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135602176 unmapped: 37576704 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135602176 unmapped: 37576704 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135602176 unmapped: 37576704 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135602176 unmapped: 37576704 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135602176 unmapped: 37576704 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298797 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135610368 unmapped: 37568512 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135610368 unmapped: 37568512 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135610368 unmapped: 37568512 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135610368 unmapped: 37568512 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135610368 unmapped: 37568512 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298797 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135610368 unmapped: 37568512 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135610368 unmapped: 37568512 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f2f24400 session 0x55b6f1996b40
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135888896 unmapped: 37289984 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f3d2d000 session 0x55b6f4de34a0
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135888896 unmapped: 37289984 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 106.694984436s of 106.934410095s, submitted: 93
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f459cc00 session 0x55b6f4ddb860
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135888896 unmapped: 37289984 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2299117 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135888896 unmapped: 37289984 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135888896 unmapped: 37289984 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b48e6000/0x0/0x1bfc00000, data 0x7ba0c0a/0x7178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f5c2c000 session 0x55b6f19961e0
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 135897088 unmapped: 37281792 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 35823616 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 35823616 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 35823616 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 35823616 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 35807232 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 35807232 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 35807232 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 35807232 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 35807232 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 35807232 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 35807232 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f2a0c400 session 0x55b6f3ed45a0
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 35807232 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137371648 unmapped: 35807232 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137379840 unmapped: 35799040 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137379840 unmapped: 35799040 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137379840 unmapped: 35799040 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137379840 unmapped: 35799040 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137379840 unmapped: 35799040 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137379840 unmapped: 35799040 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137379840 unmapped: 35799040 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 35790848 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.5 total, 600.0 interval#012Cumulative writes: 16K writes, 48K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 16K writes, 5677 syncs, 2.87 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 984 writes, 1511 keys, 984 commit groups, 1.0 writes per commit group, ingest: 0.56 MB, 0.00 MB/s#012Interval WAL: 984 writes, 466 syncs, 2.11 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 35790848 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 35790848 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 35790848 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 35790848 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 35790848 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 35790848 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f2026000 session 0x55b6f47f5680
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 35790848 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 35790848 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 35790848 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 35790848 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f2a0d400 session 0x55b6f4fca960
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137445376 unmapped: 35733504 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137453568 unmapped: 35725312 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137453568 unmapped: 35725312 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137453568 unmapped: 35725312 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137453568 unmapped: 35725312 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137453568 unmapped: 35725312 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f40bb800 session 0x55b6f4b1c780
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137453568 unmapped: 35725312 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137453568 unmapped: 35725312 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137453568 unmapped: 35725312 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137453568 unmapped: 35725312 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137453568 unmapped: 35725312 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137453568 unmapped: 35725312 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137453568 unmapped: 35725312 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137453568 unmapped: 35725312 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353470 data_alloc: 218103808 data_used: 18554880
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137453568 unmapped: 35725312 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137453568 unmapped: 35725312 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137453568 unmapped: 35725312 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 169.828842163s of 170.185073853s, submitted: 32
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137461760 unmapped: 35717120 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f207bc00 session 0x55b6f4ddb0e0
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137519104 unmapped: 35659776 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b4210000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 35561472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137707520 unmapped: 35471360 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137707520 unmapped: 35471360 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137707520 unmapped: 35471360 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137707520 unmapped: 35471360 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137707520 unmapped: 35471360 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137707520 unmapped: 35471360 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137707520 unmapped: 35471360 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137707520 unmapped: 35471360 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137707520 unmapped: 35471360 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137707520 unmapped: 35471360 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137707520 unmapped: 35471360 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137707520 unmapped: 35471360 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137707520 unmapped: 35471360 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137715712 unmapped: 35463168 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137715712 unmapped: 35463168 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137715712 unmapped: 35463168 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137715712 unmapped: 35463168 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137723904 unmapped: 35454976 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137723904 unmapped: 35454976 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137723904 unmapped: 35454976 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137723904 unmapped: 35454976 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137723904 unmapped: 35454976 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137723904 unmapped: 35454976 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137732096 unmapped: 35446784 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f5157000 session 0x55b6f45f45a0
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137732096 unmapped: 35446784 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137732096 unmapped: 35446784 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137732096 unmapped: 35446784 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137732096 unmapped: 35446784 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137732096 unmapped: 35446784 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137732096 unmapped: 35446784 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137732096 unmapped: 35446784 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 35438592 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 35438592 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 35438592 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 35438592 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 35438592 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 35438592 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 35438592 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137740288 unmapped: 35438592 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137748480 unmapped: 35430400 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137748480 unmapped: 35430400 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137748480 unmapped: 35430400 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137748480 unmapped: 35430400 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137748480 unmapped: 35430400 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137748480 unmapped: 35430400 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137748480 unmapped: 35430400 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137748480 unmapped: 35430400 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 35422208 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 35422208 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 35422208 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 35422208 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 35422208 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 35422208 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 35422208 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 35422208 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 35422208 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 35422208 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 35422208 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 35422208 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 35422208 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 35422208 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 35422208 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 35422208 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 35422208 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137764864 unmapped: 35414016 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137764864 unmapped: 35414016 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137764864 unmapped: 35414016 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137764864 unmapped: 35414016 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137764864 unmapped: 35414016 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137764864 unmapped: 35414016 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137764864 unmapped: 35414016 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137764864 unmapped: 35414016 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137764864 unmapped: 35414016 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137764864 unmapped: 35414016 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137764864 unmapped: 35414016 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137764864 unmapped: 35414016 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137764864 unmapped: 35414016 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137773056 unmapped: 35405824 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137773056 unmapped: 35405824 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137773056 unmapped: 35405824 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137773056 unmapped: 35405824 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137773056 unmapped: 35405824 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137773056 unmapped: 35405824 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137773056 unmapped: 35405824 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137773056 unmapped: 35405824 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137773056 unmapped: 35405824 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137773056 unmapped: 35405824 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137773056 unmapped: 35405824 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137773056 unmapped: 35405824 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137773056 unmapped: 35405824 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137773056 unmapped: 35405824 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137773056 unmapped: 35405824 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137773056 unmapped: 35405824 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f4680800 session 0x55b6f2aefc20
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f73da400 session 0x55b6f2a75680
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f5b5bc00 session 0x55b6f4224b40
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f4d62000 session 0x55b6f47f4f00
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f5b58c00 session 0x55b6f47f4000
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f5156c00 session 0x55b6f1996000
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f459f800 session 0x55b6f2955860
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f459ec00 session 0x55b6f4b1c5a0
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f7231400 session 0x55b6f45f54a0
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f7231800 session 0x55b6f4db4d20
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f514c400 session 0x55b6f3ef3e00
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353006 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f2fb9c00 session 0x55b6f4ddaf00
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f5400800 session 0x55b6f20dc5a0
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b3e00000/0x0/0x1bfc00000, data 0x8275c33/0x784e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 435.237915039s of 436.059173584s, submitted: 300
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 ms_handle_reset con 0x55b6f5155400 session 0x55b6f45f43c0
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2352686 data_alloc: 218103808 data_used: 18558976
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 179 handle_osd_map epochs [180,180], i have 179, src has [1,180]
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 ms_handle_reset con 0x55b6f5b58800 session 0x55b6f4dda780
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 ms_handle_reset con 0x55b6f2007000 session 0x55b6f42252c0
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 35282944 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 ms_handle_reset con 0x55b6f2fb9c00 session 0x55b6f4ddad20
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 35282944 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3469000/0x0/0x1bfc00000, data 0x8c0a7dc/0x81e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 35282944 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 ms_handle_reset con 0x55b6f3c65000 session 0x55b6f2bdba40
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 35282944 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3469000/0x0/0x1bfc00000, data 0x8c0a7dc/0x81e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2428763 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 35282944 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3469000/0x0/0x1bfc00000, data 0x8c0a7dc/0x81e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 ms_handle_reset con 0x55b6f7442c00 session 0x55b6f4de23c0
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 35282944 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 35282944 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 35282944 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 35282944 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2428763 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 35282944 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3469000/0x0/0x1bfc00000, data 0x8c0a7dc/0x81e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.197553635s of 13.351867676s, submitted: 36
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 35282944 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 35282944 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 35282944 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137895936 unmapped: 35282944 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361857 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 ms_handle_reset con 0x55b6f5b58800 session 0x55b6f2eb92c0
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137920512 unmapped: 35258368 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1111]
                                              ** DB Stats **
                                              Uptime(secs): 7200.5 total, 600.0 interval
                                              Cumulative writes: 16K writes, 50K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.00 MB/s
                                              Cumulative WAL: 16K writes, 6024 syncs, 2.82 writes per sync, written: 0.04 GB, 0.00 MB/s
                                              Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                              Interval writes: 701 writes, 1265 keys, 701 commit groups, 1.0 writes per commit group, ingest: 0.53 MB, 0.00 MB/s
                                              Interval WAL: 701 writes, 347 syncs, 2.02 writes per sync, written: 0.00 GB, 0.00 MB/s
                                              Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137920512 unmapped: 35258368 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137920512 unmapped: 35258368 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 ms_handle_reset con 0x55b6f7443000 session 0x55b6f20ddc20
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: mgrc ms_handle_reset ms_handle_reset con 0x55b6f5152000
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1334415348
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1334415348,v1:192.168.122.100:6801/1334415348]
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: mgrc handle_mgr_configure stats_period=5
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 ms_handle_reset con 0x55b6f5084000 session 0x55b6f4b1cf00
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 ms_handle_reset con 0x55b6f4ff0400 session 0x55b6f2b585a0
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 ms_handle_reset con 0x55b6f2f87400 session 0x55b6f227d4a0
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 35250176 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 ms_handle_reset con 0x55b6f5089000 session 0x55b6f2f721e0
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 ms_handle_reset con 0x55b6f5eda000 session 0x55b6f4de2780
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 ms_handle_reset con 0x55b6f45b1c00 session 0x55b6f4b0e960
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 ms_handle_reset con 0x55b6f5b5ac00 session 0x55b6f2b59680
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137338880 unmapped: 35840000 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 35823616 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 35823616 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 35823616 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 35823616 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 35823616 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 35823616 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 35823616 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 35823616 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 ms_handle_reset con 0x55b6f4d63c00 session 0x55b6f2f545a0
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 35823616 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 35823616 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 35823616 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361097 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 35823616 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfc000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137355264 unmapped: 35823616 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 147.082885742s of 147.144195557s, submitted: 18
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137379840 unmapped: 35799040 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 35790848 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 35790848 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2360921 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfd000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 35790848 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137404416 unmapped: 35774464 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfd000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [0,0,0,0,1])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137420800 unmapped: 35758080 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 ms_handle_reset con 0x55b6f2f24800 session 0x55b6f3ed4960
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 35741696 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 35676160 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2361137 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137551872 unmapped: 35627008 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 35561472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 35561472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfd000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 35561472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 35561472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2360921 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 35561472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 35561472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfd000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 35561472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 35561472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 35561472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2360921 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 35561472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 35561472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfd000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 35561472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 35561472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfd000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 35561472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2360921 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 35561472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137617408 unmapped: 35561472 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137625600 unmapped: 35553280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137625600 unmapped: 35553280 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfd000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2360921 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfd000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2360921 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfd000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfd000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfd000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2360921 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfd000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2360921 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfd000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfd000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137633792 unmapped: 35545088 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2360921 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 35536896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 35536896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 35536896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfd000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 35536896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfd000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfd000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 35536896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2360921 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 35536896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 35536896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfd000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 35536896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 35536896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 35536896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2360921 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 35536896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 35536896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b3dfd000/0x0/0x1bfc00000, data 0x827777a/0x7851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 35536896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 35536896 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 58.592838287s of 61.872817993s, submitted: 329
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137699328 unmapped: 35479552 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2417686 data_alloc: 218103808 data_used: 18567168
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137699328 unmapped: 35479552 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b35fc000/0x0/0x1bfc00000, data 0x8a7779d/0x8052000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 180 handle_osd_map epochs [181,181], i have 180, src has [1,181]
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137715712 unmapped: 35463168 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 181 ms_handle_reset con 0x55b6f5b59800 session 0x55b6f45f41e0
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137715712 unmapped: 35463168 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 181 handle_osd_map epochs [181,182], i have 181, src has [1,182]
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137748480 unmapped: 35430400 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 182 heartbeat osd_stat(store_statfs(0x1b35f6000/0x0/0x1bfc00000, data 0x8a7b0dd/0x8058000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137748480 unmapped: 35430400 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2423818 data_alloc: 218103808 data_used: 18579456
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 182 ms_handle_reset con 0x55b6f5088000 session 0x55b6f227c3c0
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137781248 unmapped: 35397632 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2369546 data_alloc: 218103808 data_used: 18575360
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 182 heartbeat osd_stat(store_statfs(0x1b3df6000/0x0/0x1bfc00000, data 0x827b0ba/0x7857000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 182 ms_handle_reset con 0x55b6f65e2c00 session 0x55b6f2f72780
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 182 handle_osd_map epochs [183,183], i have 182, src has [1,183]
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.586056709s of 10.986593246s, submitted: 44
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 35389440 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 35381248 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 ms_handle_reset con 0x55b6f2f24400 session 0x55b6f47f5860
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [1])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 ms_handle_reset con 0x55b6f3d2d000 session 0x55b6f473cb40
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 ms_handle_reset con 0x55b6f459cc00 session 0x55b6f4225860
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 ms_handle_reset con 0x55b6f5c2c000 session 0x55b6f4b0fc20
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 ms_handle_reset con 0x55b6f4f82400 session 0x55b6f3ed4000
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 ms_handle_reset con 0x55b6f2006000 session 0x55b6f4481e00
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137805824 unmapped: 35373056 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 ms_handle_reset con 0x55b6f5156000 session 0x55b6f2f73860
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137814016 unmapped: 35364864 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 7800.5 total, 600.0 interval
Cumulative writes: 17K writes, 51K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.00 MB/s
Cumulative WAL: 17K writes, 6346 syncs, 2.79 writes per sync, written: 0.04 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 709 writes, 1322 keys, 709 commit groups, 1.0 writes per commit group, ingest: 0.52 MB, 0.00 MB/s
Interval WAL: 709 writes, 322 syncs, 2.20 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 35356672 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 ms_handle_reset con 0x55b6f5157000 session 0x55b6f41ce3c0
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b3df3000/0x0/0x1bfc00000, data 0x827cc16/0x785a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x45af9c6), peers [0,2] op hist [])
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137830400 unmapped: 35348480 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 137912320 unmapped: 35266560 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: do_command 'config diff' '{prefix=config diff}'
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: do_command 'config show' '{prefix=config show}'
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: do_command 'counter dump' '{prefix=counter dump}'
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: do_command 'counter schema' '{prefix=counter schema}'
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 138215424 unmapped: 34963456 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: prioritycache tune_memory target: 4294967296 mapped: 138067968 unmapped: 35110912 heap: 173178880 old mem: 2845415833 new mem: 2845415833
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: bluestore.MempoolThread(0x55b6f08c1b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2373544 data_alloc: 218103808 data_used: 18583552
Jan 22 10:45:41 np0005592158 ceph-osd[79044]: do_command 'log dump' '{prefix=log dump}'
Jan 22 10:45:41 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:41 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:45:41 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:41.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:45:42 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 22 10:45:42 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3457684757' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 22 10:45:42 np0005592158 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:42 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 22 10:45:42 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/150362007' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 10:45:42 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 22 10:45:42 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1894221034' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 22 10:45:43 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Jan 22 10:45:43 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1042555888' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 22 10:45:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:45:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:43.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:45:43 np0005592158 ceph-mon[81715]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:43 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:43 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:43 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:43.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:44 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Jan 22 10:45:44 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2640204771' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 22 10:45:44 np0005592158 ceph-mon[81715]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:44 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Jan 22 10:45:44 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/527252969' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 22 10:45:45 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Jan 22 10:45:45 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1922483207' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 22 10:45:45 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Jan 22 10:45:45 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/192074747' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 22 10:45:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:45.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:45 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Jan 22 10:45:45 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2776501217' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 22 10:45:45 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Jan 22 10:45:45 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1535643559' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 22 10:45:45 np0005592158 ceph-mon[81715]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:45 np0005592158 ceph-mon[81715]: Health check update: 177 slow ops, oldest one blocked for 7733 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:45 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:45 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:45 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:45.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:46 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Jan 22 10:45:46 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/21857564' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 22 10:45:46 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Jan 22 10:45:46 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2270236522' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 22 10:45:46 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:45:46 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Jan 22 10:45:46 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4187776781' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 22 10:45:46 np0005592158 systemd[1]: Starting Hostname Service...
Jan 22 10:45:46 np0005592158 systemd[1]: Started Hostname Service.
Jan 22 10:45:46 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Jan 22 10:45:46 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3353346489' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 22 10:45:46 np0005592158 podman[261836]: 2026-01-22 15:45:46.894983694 +0000 UTC m=+0.154553681 container health_status 49bd518ae0a42e556655447f39518daca30e24e9bf9c50a5c924797aece90b69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-1af6f1d8aff87e18db3fe6da87805b7db2a356193ed4aae0212174970f9b887c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:45:46 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Jan 22 10:45:46 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/327179314' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 22 10:45:47 np0005592158 ceph-mon[81715]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:47.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Jan 22 10:45:47 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3986922559' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 22 10:45:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Jan 22 10:45:47 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/715362271' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 22 10:45:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:45:47.546 139715 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:45:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:45:47.547 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:45:47 np0005592158 ovn_metadata_agent[139710]: 2026-01-22 15:45:47.547 139715 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:45:47 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Jan 22 10:45:47 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4268155440' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 22 10:45:47 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:47 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:47 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:47.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:48 np0005592158 ceph-mon[81715]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:48 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Jan 22 10:45:48 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1842434873' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 22 10:45:49 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Jan 22 10:45:49 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3352944826' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 22 10:45:49 np0005592158 ceph-mon[81715]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:49.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:49 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Jan 22 10:45:49 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3017593338' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 22 10:45:49 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:49 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:45:49 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:49.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:45:50 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Jan 22 10:45:50 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2359320859' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 22 10:45:50 np0005592158 ceph-mon[81715]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:50 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Jan 22 10:45:50 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/473160357' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 22 10:45:51 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 10:45:51 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 10:45:51 np0005592158 ceph-mon[81715]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:51 np0005592158 ceph-mon[81715]: Health check update: 212 slow ops, oldest one blocked for 7738 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:51 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 10:45:51 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 10:45:51 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:45:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:51.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:51 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Jan 22 10:45:51 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/47089867' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 22 10:45:51 np0005592158 radosgw[82426]: ====== starting new request req=0x7fdbb44d66f0 =====
Jan 22 10:45:51 np0005592158 radosgw[82426]: ====== req done req=0x7fdbb44d66f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:45:51 np0005592158 radosgw[82426]: beast: 0x7fdbb44d66f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:51.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:45:52 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 10:45:52 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 10:45:52 np0005592158 ceph-mon[81715]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:52 np0005592158 ceph-mon[81715]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:52 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 10:45:52 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 10:45:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Jan 22 10:45:52 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3034816724' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 22 10:45:52 np0005592158 ceph-mon[81715]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df"} v 0) v1
Jan 22 10:45:52 np0005592158 ceph-mon[81715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3037175596' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
